How to represent a method call in a loop in a sequence diagram?

I know that one uses a box with "loop" in the corner when we have a loop that involves several objects. Is it the same with just one object?

Yes. You can place a loop fragment over a self-call, and that simply means a repetitive call. A guard in square brackets (top left of the fragment) tells when the loop will end.
However, graphical programming isn't a good idea at all. For such trivial cases a note or pseudocode is better suited. In most cases that information is obvious from the context anyway and should not be presented graphically. Don't underestimate the programmers doing the coding in the end.
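For illustration only, here is a minimal Java sketch of what such a loop fragment over a self-call expresses; all names are hypothetical:

class Worker {
    void process() {
        // The loop fragment with guard [hasMoreItems] corresponds to:
        while (hasMoreItems()) {
            handleNextItem();   // the repeated self-call
        }
    }

    boolean hasMoreItems() { return false; } // hypothetical guard
    void handleNextItem() {}                 // hypothetical self-call target
}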


Nested fragments UML understanding?

I have a UML book, and I am reading about nested fragments. It shows an example of a nested fragment, but here is what I don't get: why does it say "If the condition 'cancellation not successful' is true when entering this fragment (i.e. the cancellation was unsuccessful), the interaction within this fragment is no longer executed"?
What I have learned so far is that a condition should be true for the interaction to be executed, but in this case it says the opposite (they say it should be false for the interaction to be executed).
See for the image: https://ibb.co/CmstLcX
I think this is simply a typo in the book. The diagram makes sense, but the text describes nonsense. While the condition is true, the messages in the loop will happen up to three times.
Maybe the author got confused, because the message immediately before the loop is Cancellation. I assume the loop guard is referring to the success of the Order cancellation message, not to this Cancellation.
By the way, reply messages need to have the name of the original message. Most people (and some textbooks) get this wrong. I'll grant that the text on the reply message is often meant to be the return value. As such, it should be separated from the name of the message with a colon. In your case it is probably not necessary to repeat the message name, even though it is technically required. However, the colon is mandatory.
If it is the return value, all reply messages in the loop return the value Acceptance. I wonder how the guard can then evaluate to true after the first time. Maybe the diagram is only showing the scenario for this case. That is perfectly OK; a sequence diagram almost never shows all possible scenarios. But then the loop doesn't make sense. I guess the author didn't mean to return a specific value.
Or maybe it is the assignment target. In that case, it should look like this: Acceptance=Order Cancellation. Then the Acceptance attribute of the Dispatcher Workstation would be filled with whatever gets returned by the Order Cancellation message. Of course, then I would expect this attribute to be used in the guard, like this: [not Acceptance].
A third possibility is that the author didn't mean synchronous communication and just wanted to send signals. The Acceptance signal could well contain an attribute Cancellation not successful. Then, of course, there would be no filled arrows and no dashed lines.
I can even think of a fourth possibility. Maybe the author wanted to show the name of an out parameter of the called operation. But this would officially look like this: Order Cancellation(Acceptance). Again the name of the message could be omitted, but the round brackets are needed to make the intention clear.
I think the diagram leaves a lot more questions open than you asked.
It only says "If the cancellation was not successful, then" (do exception handling). This is pretty straightforward: if not false is the same as if true. Since it is within a break fragment, its contents (whatever is inside) will be performed, and as a result it will break out of the fragment where it is contained (usually some loop). Having a break fragment stand alone seems a bit odd (not to say wrong).
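A minimal Java sketch of what a break fragment inside a loop expresses, with hypothetical names:

interface Order {
    boolean cancel();   // hypothetical: true if cancellation succeeded
}

class CancellationHandler {
    // The enclosing loop fragment iterates over orders; the break fragment
    // fires when its guard [cancellation not successful] is true.
    void cancelAll(java.util.List<Order> orders) {
        for (Order order : orders) {
            if (!order.cancel()) {      // guard of the break fragment
                reportFailure(order);   // contents of the break fragment
                break;                  // abandon the rest of the enclosing fragment
            }
        }
    }

    void reportFailure(Order order) { /* exception handling goes here */ }
}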
UML 2.5 p. 581:
17.6.3.9 Break
The interactionOperator break designates that the CombinedFragment represents a breaking scenario in the sense that the operand is a scenario that is performed instead of the remainder of the enclosing InteractionFragment. A break operator with a guard is chosen when the guard is true and the rest of the enclosing Interaction Fragment is ignored. When the guard of the break operand is false, the break operand is ignored and the rest of the enclosing InteractionFragment is chosen. The choice between a break operand without a guard and the rest of the enclosing InteractionFragment is done non-deterministically.
A CombinedFragment with interactionOperator break should cover all Lifelines of the enclosing InteractionFragment.
Except for that: you should not take fragments too seriously. Graphical programming is nonsense. Make only sparing use of any such constructs, and only if they help in understanding certain behavior. Do not be tempted to re-document existing code this way. Code is much denser and easier to read. YMMV

Dependent function on velocity in UML

I am currently doing several UML tasks for practice and I got stuck on one of them. In general, I want to model moving objects. When a moving object moves, it goes to a neighboring field (a field may have multiple neighbors). There are two kinds of objects, each of which behaves differently, and "Object 2" moves two times faster than "Object 1". So basically I have to represent that "Object 1" moves half as far as "Object 2" in the same time. How can I make the movement dependent on velocity and show this in the diagrams? Here is my basic class diagram and my sequence diagram without the velocity:
I guess I should make the Move() functions dependent on the velocity too, but I do not understand whether that is enough or whether I must somehow represent in the sequences that "Object 2" takes twice as many steps as "Object 1" in the same time.
This doesn't seem like something you would model in a class or sequence diagram beyond what you've already done. The fact that an object's movement speed depends on its velocity should be readily understandable if you have a class called MovingObject with a velocity member and a Move() method.
If you really do feel like you need to model that, I'd look into using a UML timing diagram.
Just use a combined fragment with a loop operator that includes the getNeighbor message exchange. The parameter would be the velocity. It would look like this:
loop (velocity)
Of course, you also need to handle the cases where there are not enough neighbors left or where multiple neighbors are returned.
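A minimal Java sketch of what that loop(velocity) fragment could translate to; the class and method names are assumptions based on your diagrams:

interface Field {
    Field getNeighbor();   // simplified: may return one of several neighbors, or null
}

class MovingObject {
    int velocity;          // e.g. 1 for Object 1, 2 for Object 2
    Field position;

    MovingObject(int velocity, Field start) {
        this.velocity = velocity;
        this.position = start;
    }

    // One call to move() advances the object `velocity` fields,
    // which is what the loop(velocity) fragment expresses.
    void move() {
        for (int i = 0; i < velocity; i++) {
            Field next = position.getNeighbor();
            if (next == null) break;   // no neighbors left
            position = next;
        }
    }
}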
Some further remarks:
If the velocity is a constant, you could define it with a default value in the subclasses. Otherwise the subclasses seem to be identical and would be superfluous.
f1 seems to be the name of the association end at the field side. You should show this.
You are not using the parameter f of the move() operation. Why did you model it?

What does it mean for something to "compose well"?

Many a time, I've come across statements of the form
X does/doesn't compose well.
I can remember a few instances I've read recently:
Macros don't compose well (context: clojure)
Locks don't compose well (context: clojure)
Imperative programming doesn't compose well... etc.
I want to understand the implications of composability in terms of designing/reading/writing code. Examples would be nice.
"Composing" functions basically just means sticking two or more functions together to make a big function that combines their functionality in a useful way. Essentially, you define a sequence of functions and pipe the results of each one into the next, finally giving the result of the whole process. Clojure provides the comp function to do this for you, you could do it by hand too.
Functions that you can chain with other functions in creative ways are more useful in general than functions that you can only call in certain conditions. For example, if we didn't have the last function and only had the traditional Lisp list functions, we could easily define last as (def last (comp first reverse)). Look at that — we didn't even need to defn or mention any arguments, because we're just piping the result of one function into another. This would not work if, for example, reverse took the imperative route of modifying the sequence in-place. Macros are problematic as well because you can't pass them to functions like comp or apply.
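The same idea works in Java, sketched here with java.util.function.Function; the reverse and first helpers below are illustrative stand-ins, not library functions:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Function;

public class ComposeDemo {
    public static void main(String[] args) {
        // Two small, self-contained functions...
        Function<List<Integer>, List<Integer>> reverse = list -> {
            List<Integer> copy = new ArrayList<>(list);
            Collections.reverse(copy);
            return copy;          // returns a new list; no in-place mutation
        };
        Function<List<Integer>, Integer> first = list -> list.get(0);

        // ...composed into a third, with no new logic of its own.
        Function<List<Integer>, Integer> last = reverse.andThen(first);

        System.out.println(last.apply(List.of(1, 2, 3)));   // prints 3
    }
}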
Composition in programming means assembling bigger pieces out of smaller ones.
Composition of unary functions creates a more complicated unary function by chaining simpler ones.
Composition of control flow constructs places control flow constructs inside other control flow constructs.
Composition of data structures combines multiple simpler data structures into a more complicated one.
Ideally, a composed unit works like a basic unit and you as a programmer do not need to be aware of the difference. If things fall short of the ideal, if something doesn't compose well, your composed program may not have the (intended) combined behavior of its individual pieces.
Suppose I have some simple C code.
void run_with_resource(void) {
    Resource *r = create_resource();
    do_some_work(r);
    destroy_resource(r);
}
C facilitates compositional reasoning about control flow at the level of functions. I don't have to care about what actually happens inside do_some_work(); I know just by looking at this small function that every time a resource is created on line 2 with create_resource(), it will eventually be destroyed on line 4 by destroy_resource().
Well, not quite. What if create_resource() acquires a lock and destroy_resource() frees it? Then I have to worry about whether do_some_work acquires the same lock, which would prevent the function from finishing. What if do_some_work() calls longjmp(), and skips the end of my function entirely? Until I know what goes on in do_some_work(), I won't be able to predict the control flow of my function. We no longer have compositionality: we can no longer decompose the program into parts, reason about the parts independently, and carry our conclusions back to the whole program. This makes designing and debugging much harder and it's why people care whether something composes well.
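As a hedged Java sketch of that hidden-lock hazard, here a non-reentrant Semaphore stands in for the lock the resource functions might share (all names are hypothetical):

import java.util.concurrent.Semaphore;

public class HiddenLock {
    static final Semaphore resource = new Semaphore(1); // non-reentrant "lock"

    static void doSomeWork() throws InterruptedException {
        resource.acquire();   // tries to take the same permit the caller holds: hangs forever
        resource.release();
    }

    public static void main(String[] args) throws InterruptedException {
        resource.acquire();   // "create_resource()" takes the lock
        doSomeWork();         // deadlocks here, so the cleanup below never runs
        resource.release();   // "destroy_resource()" -- unreachable
    }
}

Looking only at main, the acquire/release pair appears balanced; you have to read doSomeWork to see why the program never finishes. That is exactly the loss of compositional reasoning described above.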
"Bang for the Buck" - composing well implies a high ratio of expressiveness per rule-of-composition. Each macro introduces its own rules of composition. Each custom data structure does the same. Functions, especially those using general data structures have far fewer rules.
Assignment and other side effects, especially wrt concurrency have even more rules.
Think about when you write functions or methods. You create a group of functionality to do a specific task. When working in an Object Oriented language you cluster your behavior around the actions you think a distinct entity in the system will perform. Functional programs break away from this by encouraging authors to group functionality according to an abstraction. For example, the Clojure Ring library comprises a group of abstractions that cover routing in web applications.
Ring is composable in that functions that describe paths in the system (routes) can be grouped into higher-order functions (middleware). In fact, Clojure is so dynamic that it is possible (and you are encouraged) to come up with patterns of routes that can be dynamically created at runtime. This is the essence of composability: instead of coming up with patterns that solve a certain problem, you focus on patterns that generate solutions to a certain class of problem. Builders and code generators are just two of the common patterns used in functional programming. Functional programming is the art of patterns that generate other patterns (and so on and so on).
The idea is to solve a problem at its most basic level, then come up with patterns or groups of the lowest-level functions that solve the problem. Once you start to see patterns at the lowest level, you've discovered composition. As folks discover second-order patterns in groups of functions, they may start to see a third level. And so on...
Composition (in the context you describe, at a functional level) is typically the ability to feed one function into another cleanly and without intermediate processing. An example of composition is std::cout in C++:
cout << each << item << links << on;
That is a simple example of composition which doesn't really "look" like composition.
Another example with a form more visibly compositional:
foo(bar(baz()));
Wikipedia Link
Composition is useful for readability and compactness; however, chaining large collections of interlocking functions which can potentially return error codes or junk data can be hazardous (this is why it is best to minimize error-code or null return values).
Provided your functions use exceptions, or alternatively return null objects, you can minimize the need for branching (if) on errors and maximize the compositional potential of your code at no extra risk.
Object composition (vs inheritance) is a separate issue (and not what you are asking, but it shares the name). It is one of containment to derive object hierarchy as opposed to direct inheritance.
Within the context of clojure, this comment addresses certain aspects of composability. In general, it seems to emerge when units of the system do one thing well, do not require other units to understand its internals, eschew side-effects, and accept and return the system's pervasive data structures. All of the above can be seen in M2tM's C++ example.
Composability, applied to functions, means that the functions are smaller and well-defined, and thus easy to integrate into other functions (I have seen this idea in the book "The Joy of Clojure").
The concept can apply to other things that are supposed to be composed into something else.
The purpose of composability is reuse. For example, a well-built (composable) function is easier to reuse.
Macros aren't that composable because you can't pass them as parameters.
Locks are crap because you can't really give them names (define them well) or reuse them; you just take them in place.
Imperative languages aren't that composable because (some of them, at least) don't have closures. If you want functionality passed as a parameter, you're stuck: you have to build an object and pass that. Disclaimer: I'm not entirely convinced this last idea is true, so research more before taking it for granted.
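To illustrate the point about passing functionality, here is a small Java sketch contrasting the object-wrapping style with a closure/lambda (the names are made up):

public class PassBehavior {
    // Without closures: wrap the behavior in an object and pass that.
    static final Runnable oldStyle = new Runnable() {
        @Override public void run() { System.out.println("do the thing"); }
    };

    // With lambdas: pass the behavior directly.
    static final Runnable newStyle = () -> System.out.println("do the thing");

    static void perform(Runnable action) {
        action.run();   // the caller doesn't care how the behavior was packaged
    }

    public static void main(String[] args) {
        perform(oldStyle);
        perform(newStyle);
    }
}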
Another idea on imperative languages is that they don't compose well because they imply state (from the Wikipedia knowledge base :) "Imperative programming - describes computation in terms of statements that change a program state").
State does not compose well because, although you have given a specific "something" as input, that "something" generates an output according to its state. Different internal state, different behaviour, and thus you can say goodbye to what you were expecting to happen.
With state, you depend too much on knowing the current state of an object if you want to predict its behavior. More stuff to keep in the back of your mind, less composable (remember "well-defined"? Or "small and simple", as in "easy to use"?).
PS: Thinking of learning Clojure, huh? Investigating...? Good for you! :P

How to represent a call being made in a loop in a sequence diagram?

I'm creating a sequence diagram, and one of the classes is being observed by another class. The observed class calls update in the observer every 5 seconds in a loop. I need to show this in the sequence diagram. Is there a way to show it looping indefinitely, out of sequence as it were?
Or does it not make sense in the context of a sequence diagram; should I not include it? Or should I include it in a different type of diagram?
You can use a box enclosing the message send arrow (and whatever else is inside the same repetitive construct).
See this tutorial for an example.
link to larger image (archived)
Just adding a clearer picture, because the one in #joel.tony's answer is damn blurry.
As you can see, the loop happens inside the frame called loop n. There is a guard, array_size, which controls the loop's iterations.
In conclusion, the sequence of messages inside the loop n frame (those between the DataControl and DataSource objects) will happen array_size times.
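For comparison only, a minimal Java sketch of the kind of polling loop the original question describes; the Subject and Observer names are assumptions, not from the question's code:

interface Observer {
    void update();
}

class Subject implements Runnable {
    private final Observer observer;

    Subject(Observer observer) { this.observer = observer; }

    @Override
    public void run() {
        // This whole while-loop is what the loop fragment wraps in the diagram.
        while (!Thread.currentThread().isInterrupted()) {
            observer.update();        // the message inside the loop fragment
            try {
                Thread.sleep(5_000);  // repeat every 5 seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // stop looping
            }
        }
    }
}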

When to use If-else if-else over switch statements and vice versa [duplicate]

This question already has answers here:
Advantage of switch over if-else statement
Eliminating `switch` statements [closed]
Is there any significant difference between using if/else and switch-case in C#?
Why would you want to use a switch block over a series of if statements?
switch statements seem to do the same thing but take longer to type.
As with most things, you should pick which to use based on the context and what is conceptually the correct way to go. A switch is really saying "pick one of these based on this variable's value", but an if statement is just a series of boolean checks.
As an example, if you were doing:
int value = // some value
if (value == 1) {
    doThis();
} else if (value == 2) {
    doThat();
} else {
    doTheOther();
}
This would be much better represented as a switch, as it then makes it immediately obvious that the choice of action is occurring based on the value of value and not some arbitrary test.
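For comparison, the same logic expressed as a switch:

switch (value) {
    case 1:
        doThis();
        break;
    case 2:
        doThat();
        break;
    default:
        doTheOther();
        break;
}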
Also, if you find yourself writing switches and if-elses in an OO language, you should consider getting rid of them and using polymorphism to achieve the same result, if possible.
Finally, regarding switch taking longer to type: I can't remember who said it, but I once read someone ask "is your typing speed really the thing that limits how quickly you code?" (paraphrased).
If you are switching on the value of a single variable, then I'd use a switch every time; it's what the construct was made for.
Otherwise, stick with multiple if-else statements.
Concerning readability:
I typically prefer if/else constructs over switch statements, especially in languages that allow fall-through cases. What I've found, often, is that as projects age and multiple developers get involved, you'll start having trouble with the construction of a switch statement.
If the statements become anything more than simple, many programmers get lazy, and instead of reading the entire statement to understand it, they'll simply pop in a case to cover whatever case they're adding.
I've seen many cases where code repeats in a switch statement because a person's test was already covered and a simple fall-through case would have sufficed, but laziness forced them to add redundant code at the end instead of trying to understand the switch. I've also seen some nightmarish switch statements with many poorly constructed cases, where simply trying to follow all the logic, with many fall-through cases dispersed throughout and many cases that weren't, becomes difficult... which leads back to the first, redundancy problem I mentioned.
Theoretically, the same problem could exist with if/else constructs, but in practice this just doesn't seem to happen as often. Maybe (just a guess) programmers are forced to read a bit more carefully because they need to understand the often more complex conditions being tested within the if/else construct. If you're writing something simple that you know others are likely never to touch, and you can construct it well, then I guess it's a toss-up. In that case, whatever is more readable and feels best to you is probably the right answer, because you're likely to be the one sustaining that code.
Concerning speed:
Switch statements often perform faster than if-else constructs, but not always. Since the possible values of a switch statement are laid out beforehand, compilers are able to optimize performance by constructing jump tables. Each condition doesn't have to be tested, as in an if/else construct (well, until you find the right one, anyway).
This isn't always the case, though. If you have a simple switch, say with possible values of 1 to 10, this holds. The more values you add, the larger the jump tables need to be and the less efficient the switch becomes (not less efficient than an if/else, but less efficient than the comparatively simple switch statement). Also, if the values are highly variant (i.e. instead of 1 to 10, you have 10 possible values of, say, 1, 1000, 10000, 100000, and so on up to 100000000000), the switch is less efficient than in the simpler case.
Hope this helps.
Switch statements are far easier to read and maintain, hands down. And are usually faster and less error prone.
Use a switch every time you have more than two conditions on a single variable. Take weekdays, for example: if you have a different action for every weekday, you should use a switch (see the sketch below).
In other situations (multiple variables or complex if clauses) you should use ifs, but there isn't a hard rule on where to use each.
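Here is a small Java sketch of the weekday case; it uses the newer switch-expression syntax (Java 14+), and the actions are made up for illustration:

import java.time.DayOfWeek;

public class WeekdayDemo {
    // One action per weekday, dispatched on a single variable.
    static String actionFor(DayOfWeek day) {
        return switch (day) {
            case MONDAY    -> "plan the week";
            case TUESDAY   -> "write code";
            case WEDNESDAY -> "review code";
            case THURSDAY  -> "test";
            case FRIDAY    -> "deploy";
            case SATURDAY, SUNDAY -> "rest";
        };
    }

    public static void main(String[] args) {
        System.out.println(actionFor(DayOfWeek.FRIDAY));   // prints "deploy"
    }
}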
I personally prefer to see switch statements over too many nested if-elses because they can be much easier to read. Switches also read better when dispatching on a state.
See also the comment in this post regarding pacman ifs.
This depends very much on the specific case. Preferably, I think one should use a switch over if-else if there are many nested if-elses.
The question is: how many is many?
Yesterday I was asking myself the same question:
public enum ProgramType {
    NEW, OLD
}

if (progType == OLD) {
    // ...
} else if (progType == NEW) {
    // ...
}

if (progType == OLD) {
    // ...
} else {
    // ...
}

switch (progType) {
    case OLD:
        // ...
        break;
    case NEW:
        // ...
        break;
    default:
        break;
}
In this case, the first if has an unnecessary second test. The second feels a little bad because it hides the NEW case.
I ended up choosing the switch because it just reads better.
I have often thought that using elseif and falling through case instances (where the language permits) are code odours, if not smells.
For myself, I have normally found that nested if/then/elses usually reflect things better than elseifs, and that for mutually exclusive cases (often where one combination of attributes takes precedence over another), case or something similar is clearer to read two years later.
I think the select statement used by Rexx is a particularly good example of how to do "case" well (no drop-throughs). A silly example:
Select
  When (Vehicle ¬= "Car") Then
    Name = "Red Bus"
  When (Colour == "Red") Then
    Name = "Ferrari"
  Otherwise
    Name = "Plain old other car"
End
Oh, and if the optimisation isn't up to it, get a new compiler or language...
The tendency to avoid stuff because it takes longer to type is a bad thing; try to root it out. That said, overly verbose things are also difficult to read, so small and simple is important, but it's readability, not writability, that matters. Concise one-liners can often be more difficult to read than a simple, well laid out 3 or 4 lines.
Use whichever construct best describes the logic of the operation.
Let's say you have decided to use a switch, as you are only working on a single variable which can have different values. If this would result in a small switch statement (2-3 cases), I'd say that's fine. If it seems you will end up with more, I would recommend using polymorphism instead. An AbstractFactory pattern could be used here to create an object that performs whatever action you were trying to do in the switches. The ugly switch statement will be abstracted away and you end up with cleaner code.
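As a hedged sketch of the polymorphism route, using the same ProgramType enum as in the earlier answer (redeclared here so the sketch is self-contained; the action classes are hypothetical):

enum ProgramType { NEW, OLD }

interface ProgramAction {
    void run();
}

class OldProgramAction implements ProgramAction {
    @Override public void run() { System.out.println("old behavior"); }
}

class NewProgramAction implements ProgramAction {
    @Override public void run() { System.out.println("new behavior"); }
}

class Dispatcher {
    // The one place that maps a value to a type; everywhere else just calls run().
    static ProgramAction forType(ProgramType type) {
        return type == ProgramType.OLD ? new OldProgramAction() : new NewProgramAction();
    }

    public static void main(String[] args) {
        Dispatcher.forType(ProgramType.NEW).run();   // prints "new behavior"
    }
}

The conditional survives only in the factory method; every caller works against the ProgramAction interface, which is what keeps the switch from spreading through the codebase.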
