I have some doubts about how to correctly use the {xor} constraint in UML.
I understand it in two different ways. Which one is correct?
1. The xor constraint applies to the association (either an object of type A may be associated with one object of type C; or an object of type A may be associated with zero or one object of type B; or object A could be just by itself, because we have [0..1] near B).
2. The xor constraint applies to the link (either an object of type A must be associated with exactly one object of type C, or an object of type A must be associated with exactly one object of type B).
After many years I have to fix this answer (though I got many upvotes for it).
The {xor} means that class A must have either an association to B or to C, but not to both and not to neither. That means in one case you have A * - 0..1 B and in the other case A 0..1 - 1 C. Both are legal constructs per se. It is only here that A plays two mutually exclusive roles.
This is a purely academic construct, so what it means in practice is completely open. It would be more meaningful (and helpful) if such examples from tutorials/classes had some real-world connection.
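To make that reading concrete, here is a minimal Java sketch of the interpretation above; the class names come from the diagram, while the fields and methods are my own invention:

// Hypothetical sketch: A plays exactly one of two exclusive roles,
// linked either to a B or to a C, never to both and never to neither.
class B {}
class C {}

class A {
    private B b; // the 0..1 end towards B (first role)
    private C c; // the 1 end towards C (second role)

    // The {xor} constraint as a runtime invariant: exactly one link is set.
    boolean satisfiesXor() {
        return (b != null) ^ (c != null);
    }

    void linkTo(B b) { this.b = b; this.c = null; }
    void linkTo(C c) { this.c = c; this.b = null; }
}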
Old (wrong) answer
This is simply wrong (or a puzzle). You need exactly one C to be associated with A. But then, due to the xor, you may not associate a B. Which means the B relation is always empty and you could just as well leave it out.
Maybe (!) someone has put the multiplicities on the wrong side. If you swapped them, it would make sense. If real names were used rather than A, B, and C, you could guess from the context.
Option 2 requires a multiplicity of exactly one near B.
Option 1 is suitable in the following cases:
1 near A, 0..1 near B
0..1 near A, 0..1 near B
0..1 near A, 1 near B
xor is a Boolean operator that yields true only if one of its two operands is true and the other is false.
The notation is used to specify that an instance of the base class must participate in exactly one of the associations grouped together by the {xor} constraint. Exactly one of the associations must always be active.
Related
Is it possible to connect a1 with b1 twice in an object diagram when A has only one B object and it is {nonunique}?
The nonunique constraint only makes sense if the upper multiplicity is greater than one (although of course you are still allowed to use it anyway). It means that, within a particular association, a specific object can be linked to the same object on the other side more than once.
I believe that's what you wanted to achieve; however, the constraint should be on the other end of the association (the one with multiplicity *).
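As an illustration only (the names are mine, not from the diagram), a Java sketch of what {nonunique} on the *-end amounts to: the collection behind that association end may contain the same object more than once, so a List rather than a Set.

import java.util.ArrayList;
import java.util.List;

class B {}

class A {
    // Association end with multiplicity * and {nonunique}:
    // duplicates are allowed, so a bag-like List is used instead of a Set.
    private final List<B> bs = new ArrayList<>();

    void link(B b) { bs.add(b); }
}

class Demo {
    public static void main(String[] args) {
        A a1 = new A();
        B b1 = new B();
        a1.link(b1);
        a1.link(b1); // the same b1 linked twice -- fine for a nonunique end
    }
}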
Your diagram shows just classes. Objects have an underlined name and usually do not show compartments.
The {nonunique} constraint on the multiplicity just says that the B ends need not be unique.
The double link between a1 and b1 is perfectly legal. However, without role names it is rather pointless, and a single link would be enough.
I have this class/object diagram:
I don’t understand why that object diagram is invalid according to the given class diagram.
In the object diagram, one C object has two links to two T objects: an alpha link to one T object and a beta link to another T. So I don't think it violates the multiplicity constraints.
Could you please explain to me why the object diagram is invalid?
Yours is the most interesting question I've seen here in a long time. It is pretty tricky!
The simple reason your instances are incorrect is that every instance of type T must be associated with one C. The top instance of type T in your diagram violates that constraint on association beta (the multiplicity on the left end of the association).
There are two faults in the object diagram.
The first is only a formal fault: the lines in the object diagram between the instances are links, i.e., instances of the associations shown in the class diagram. As links are instances, the same naming rules apply to them as to class instances. So change alpha to :alpha and underline it, and it is correct. The same goes for beta.
Second, the links themselves are not correct, as a beta link from the uppermost T instance is missing. Each A object, and, since C is a specialization of A, also each C (and B) object, needs an alpha link to an S instance. As S is a generalization of T, an alpha link between an A (or one of its specializations) and an S (or one of its specializations) is needed. Further, each S (or T) may have arbitrarily many alpha links to A objects.
Each C object needs to have zero or one beta link to a T instance. In the other direction, each T instance needs exactly one C instance via a beta link. This is what is missing for the uppermost T instance.
I'm leaving my prior answer below, but on second thought the answer is that your class diagram is incomplete.
The two alpha and beta associations have no association-end names. The fact that they have different multiplicities leads to the conclusion that they must be different associations. With names it would look like this:
Expanding the inheritance will make this:
Based on this assumption, my original answer stands as follows:
The reason is that a :C has two associations, alpha and beta, each to a different :T object, not a single alpha to one and a single beta to another. So you need to add a beta alongside the alpha and vice versa.
Edit: And yes, JimL. is correct. Having two alpha links violates the constraint from the class diagram. So actually you can only have one T linked to C, which again makes the class model very strange.
The C class has a beta association to T. C inherits from A, and T inherits from S. Since there is an alpha association from A to S, this is also inherited. So you have:
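Roughly, in Java terms (a sketch with invented field names, not the author's diagram), C inherits the alpha end towards S from A and adds its own beta end towards T; since T specializes S, a T can also fill the alpha end:

// Sketch of the expanded inheritance (class names from the diagram,
// field names invented for illustration).
class S {}
class T extends S {}      // T specializes S

class A {
    S alpha;              // alpha association end towards S, inherited by subclasses
}

class B extends A {}

class C extends A {
    T beta;               // C's own beta association end towards T (0..1)
}

class Demo {
    public static void main(String[] args) {
        C c = new C();
        c.alpha = new T(); // legal: a T is an S, so it can fill the alpha end
        c.beta  = new T();
    }
}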
Suppose I have A ---r1 {bag} [1..2]--> B in a UML class diagram (that is, r1 is an association from A to B, annotated with {bag} and multiplicity [1..2]).
My Question: if a:A is an instance of A, is the following collection valid?
a.r1={(b1,1),(b1,2),(b2,1)} //collection contains two copies of b1 and one b2
In other words, do the multiplicity bounds (i.e., [1..2]) apply to the association when it is interpreted purely as r1: A --> B, or do they apply to r1: A --> Bag(B)? In the former interpretation the above collection is valid, since r1 relates a to at most two distinct instances of B, but in the latter it is not, since r1 contains three elements of Bag(B). Which interpretation is correct?
Multiplicity constraints in UML are explained in Section 7.5.3 of the UML specification, to which I was referred in this question.
p.s.1: A similar question arises when we substitute {bag} with {seq}.
p.s.2: I added the haskell tag to get comments from the large Haskell community here, as @xmojmr suggested. Thanks to @peter, who nicely drew the pictures in his answer.
As stated in the spec, a Bag is an unordered, nonunique collection.
However, this describes the relationship between the elements you are pointing to.
So your example can be expressed either way:
This means that A has references to one or two B instances, and those references are stored in a Bag (or any nonunique, unordered collection; that is an implementation detail).
To answer your question: no, because the Bag contains three B elements, whilst the allowed maximum is two.
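For what it's worth, a small Java sketch of that reading (class and field names are mine): the association end is backed by a bag-like List, and the multiplicity [1..2] bounds the number of elements in that collection, duplicates included.

import java.util.ArrayList;
import java.util.List;

class B {}

class A {
    // r1 with {bag}: duplicates allowed, so a List stands in for Bag(B).
    final List<B> r1 = new ArrayList<>();

    // Multiplicity [1..2] bounds the number of elements, duplicates included.
    boolean satisfiesMultiplicity() {
        return r1.size() >= 1 && r1.size() <= 2;
    }
}

class Check {
    public static void main(String[] args) {
        A a = new A();
        B b1 = new B(), b2 = new B();
        a.r1.add(b1);
        a.r1.add(b1);
        a.r1.add(b2);
        // three elements (b1, b1, b2) exceed the upper bound of 2
        System.out.println(a.satisfiesMultiplicity()); // prints false
    }
}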
I have one object type, call it A, which has four data members of another object type, call it B. How do I show this in a UML class diagram so that it's clear there are four B objects in every A object?
Is the only solution to put "4" next to the arrowhead pointing to class B?
It depends on what you want to achieve, in the sense of how you need to distinguish between those objects in the context of their association/link, that is, what kind of role they play:
If they are all equal, with no special difference in their role in the context of A, then a multiplicity of 4..4 will do the job, with the association end named appropriately (for example my_Bs).
If these objects play different roles in connection with A, then you can use several separate associations with lower multiplicities, 2, 3 or even 4 of them (for example, if B is a Wheel and A is a Car, you can draw 2 associations with multiplicities 2..2 each and call them "front" and "rear", or even 4 associations "front_left", "front_right", ...).
Here is how both cases look. In the second one I showed several possible options (with a maximum of 5 elements of B), just to give you an idea.
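In code, the two options would roughly correspond to the following Java sketch (my_Bs and the wheel role names follow the examples above; everything else is assumed):

class B {}

// Option 1: one association end with multiplicity 4..4 and role name my_Bs.
class A {
    final B[] my_Bs = new B[4]; // exactly four Bs, all playing the same role
}

// Option 2: separate associations with distinct role names
// (the Car/Wheel example from above).
class Wheel {}

class Car {
    Wheel front_left;
    Wheel front_right;
    Wheel rear_left;
    Wheel rear_right;
}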
It's probably clear by now, but the fundamental concept here is the role of the association end.
Aleks' answer is the best. However, you can also represent the multiplicity in one box, like this:
You can also use a composite structure diagram. See the example below:
From my point of view, myBs defined as an attribute of type B on class A has a different meaning than myBs defined as an association role between A and B (which is also different from defining it as a composition/aggregation).
If it is an attribute, then it's not a role. In that case, there is only a simple dependency relation between A and B, which must appear in the diagram.
I think this problem comes from unconsciously thinking from a "Relational Database Management System" and/or an "Object-Oriented Programming" point of view, but UML is not intended for that.
:)
Consider the following Alloy model:
open util/ordering[C]
abstract sig A {}
sig B extends A {}
sig C extends A {}
pred show {}
run show for 7
I understand why, when I run show for 7, all the instances of this model have 7 atoms of signature C. (Well, that's not quite true. I understand that the ordered signature will always have as many atoms as the scope allows, because util/ordering tells me so. But that's not quite the same as why.)
But why do no instances of this model have any atoms of signature B? Is this a side-effect of the special handling performed for util/ordering? (Intended? Unintended?) Is util/ordering intended to be applied only to top-level signatures?
Or is there something else going on that I am missing?
In the model from which this is abstracted, I'd really like to have a name like A for the union of B and C, and I'd really like C to be ordered, and I'd really like B to be unordered and non-empty. At the moment, I seem to be able to achieve any two of those goals; is there a way to manage all three at the same time?
[Addendum: I notice that specifying run show for 3 but 3 B, 3 C does achieve my three goals. By contrast, run show for 2 but 3 B produces no instances at all. Perhaps I need to understand the semantics of scope specifications better.]
Short answer: the phenomena reported result from the rules for default and implicit scopes; those rules are discussed in section B.7.6 of the Language Reference.
Longer answer:
The eventual suspicion that I should look at the semantics of scope specifications more closely proved to be warranted. In the example shown here, the rules work out exactly as documented:
For run show for 7, signature A has a default scope of 7; so do B and C. The use of the util/ordering module forces the number of C atoms to 7; that also exhausts the quota for signature A, which leaves signature B with an implicit scope of 0.
For run show for 2 but 3 B, signature A has a default scope of 2, and B has an explicit scope of 3. This leaves signature C with an implicit scope of 2 minus 3, or negative 1. That appears to count as an inconsistency; scope bounds are expected to be natural numbers.
For run show for 2 but 3 B, 3 C, signature A gets an implicit bound of 6 (the sum of its subsignatures' bounds).
As a way of gaining a better understanding of the scope rules, it proved useful to this user to execute all of the following commands:
run show for 3
run show for 3 but 2 C
run show for 3 but 2 B
run show for 3 but 2 B, 2 C
run show for 3 but 2 A
run show for 3 but 2 A, 2 C
run show for 3 but 2 A, 2 B
run show for 3 but 2 A, 2 B, 2 C
I'll leave this question in place for other answers and in the hope that it may help some other users.
I understand that the ordered signature will always have as many atoms as the scope allows, because util/ordering tells me so. But that's not quite the same as why.
The reason is that when an ordered sig is forced to contain as many atoms as the scope allows, the translator can generate an efficient symmetry-breaking predicate, which, in most examples with ordered sigs, results in much better solving times. So it is simply a trade-off, and the design decision was to enforce this extra constraint in order to gain performance.