The following model is ok, Alloy finds instances.
abstract sig A{}
sig B extends A{}
sig C extends A{}
run {} for 1 but exactly 1 B, exactly 1 C
That makes me understand that the scope is not limited by the top-level signature A, but by its extensions, B and C.
However, I have a large model (no sense posting it here) that can only be satisfied with a scope of 14. With a scope of 13 the analyzer finds no instances.
When I analyze the instance found, using the evaluator to request 'univ', I get a solution that has about 5 atoms of each signature. Only the top-level abstract signatures have 14 atoms.
Am I missing something about scope? Does it affect something else besides the signatures (such as predicates)? Does it behave differently than what I assumed with the toy example?
Why won't my model simulate with a scope of 5?
edit:
here is my model if anyone is interested in taking a look. It is the result of a model transformation, which is why legibility is an issue: http://pastebin.com/17Z00wV4
edit2:
the command below works. If I run the predicate for 5 without specifying the other scopes explicitly, the analyzer finds no instances.
run story3 for 5 but exactly 4 World, exactly 4 kPerson,
exactly 0 kSoftwareTool, exactly 1 kSourceCode,
exactly 1 kDocument, exactly 1 kDiagram, exactly 3 kChange,
exactly 1 kProject, exactly 2 coBranch, exactly 1 coRepository,
exactly 3 modeConfiguration, exactly 2 modeAtomicVersion,
exactly 2 relatorChangeRequest, exactly 0 relatorVerification,
exactly 1 relatorCheckIn, exactly 1 relatorCheckOut,
exactly 2 relatorConfigurationSelection,
exactly 1 relatorModification,
exactly 0 relatorRequestEvaluation, exactly 2 relatorMarkup
This one does not (it is the same command, but without the "exactly" keywords):
run story3 for 5 but exactly 4 World, 4 kPerson, 1 kSourceCode,
1 kDocument, 1 kDiagram, 3 kChange, 1 kProject, 2 coBranch,
1 coRepository, 3 modeConfiguration, 2 modeAtomicVersion,
2 relatorChangeRequest, 1 relatorCheckIn, 1 relatorCheckOut,
2 relatorConfigurationSelection, 1 relatorModification,
2 relatorMarkup
I was told Alloy would find any possible instances within the defined scope so
run story3 for 5
should work too!
If each signature extending another one has a well-defined scope (as is the case in the small example you gave), then the analyzer is "smart enough" to understand that the scope of the top-level signature is at least equal to the sum of the scopes of the signatures partitioning it.
If you do not give scopes to the specific signatures, I assume the analyzer cannot derive the scope of the top-level signature as described above; the top-level signature will then simply take the global scope you gave.
my code is this:
But when I execute this, it shows me only one house and one mohre. What should I do?
abstract one sig board {}
sig mohre { live: one state }
sig house extends board { ver: one Int, hor: one Int, mo: mohre }
enum state { alive, dead }
run { #house > 10 and #mohre > 8 }
Your run does not specify a scope. The default scope is 3 atoms of each sig and 16 integers ([-8..7]).
A house set of cardinality greater than 10 is therefore out of scope; such instances simply cannot exist within those bounds. If you lower the cardinalities or increase the scope, things should work.
run{#house>10 and #mohre>8} for 12 but 5 int
This command allows up to 12 atoms of each sig and 32 integers ([-16..15]). Somewhat confusingly, the integer scope is given as a bit width, and 5 bits yields 32 values.
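The bit-width-to-range relationship is easy to check outside Alloy. Here is a small Python sketch (my own helper, not part of Alloy) computing the signed range covered by a given bit width:

```python
def int_range(bitwidth):
    # Alloy's integer scope: a bit width of w covers 2**w
    # two's-complement values, from -2**(w-1) to 2**(w-1) - 1.
    lo = -(2 ** (bitwidth - 1))
    hi = 2 ** (bitwidth - 1) - 1
    return lo, hi

print(int_range(4))  # → (-8, 7): the 16 default values
print(int_range(5))  # → (-16, 15): the 32 values of "5 int"
```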
Additionally, you declared the abstract sig board with the one multiplicity. Remove the one, since it limits board, and hence its extension house, to a single atom, which prevents any solution with more than one house.
I am a beginner in Alloy (the modelling language made at MIT). I am trying to model the leasing of a two-bedroom apartment. I want to add a fact such that the number of people in each leased apartment is not more than 4. However, the instance generated on running still shows a single leased two-bedroom apartment with 10 occupants. What am I doing wrong? Also, if possible, could someone point me to some good resources for learning Alloy apart from the tutorial on the MIT website? Thanks.
abstract sig apartment {}
sig twoLeased extends apartment {
occupants: some People
} { #occupants < 5 }
sig twoUnleased extends apartment {
}
sig People {}
run {} for 3 but 4 twoLeased, 10 People
By default the bit width used to represent integers is 4, so your instance contains integers ranging from -8 to 7. In an instance where the number of occupants is 10, an integer overflow thus occurs (as 10 > 7): #occupants returns a negative number, which is less than 5, hence satisfying your invariant.
To fix this issue, you can either forbid integer overflow in the Alloy Analyzer settings or increase the bit width used to represent integers (e.g. run {} for 6 Int).
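The wrap-around can be reproduced outside Alloy. A minimal Python sketch (my own helper, mimicking 4-bit two's-complement arithmetic) shows why a count of 10 satisfies #occupants < 5:

```python
def wrap(n, bits=4):
    # Interpret n modulo 2**bits as a signed two's-complement value,
    # mimicking Alloy's fixed-bit-width integer arithmetic.
    n %= 2 ** bits
    return n - 2 ** bits if n >= 2 ** (bits - 1) else n

print(wrap(10))      # → -6: ten occupants "counted" with 4-bit ints
print(wrap(10) < 5)  # → True: the overflowed count satisfies the invariant
```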
I have some doubts about how correctly we can use {XOR} constraint in UML.
I understand how it works in two different ways. Which one is correct?
The xor constraint applies to the association (either: an object of type A may be associated with 1 object of type C; or: an object of type A may be associated with zero or 1 object of type B; or: an object of type A could stand alone, because we have [0..1] near B).
The xor constraint applies to the link (either: an object of type A must be associated with exactly one object of type C; or: an object of type A must be associated to exactly one object of type B).
After many years I have to fix this answer (though I got many upvotes for it).
The {XOR} means that class A must have an association either to B or to C, but not to both and not to neither. That means in one case you have A * - 0..1 B and in the other case A 0..1 - 1 C. Both are legal constructs per se; it is just that A plays two mutually exclusive roles here.
This is a purely academic construct, so what it means in practice is completely open. It would be more meaningful (and helpful) if such examples from tutorials/classes would have some real world connection.
Old (wrong) answer
This is simply wrong (or a puzzle). You need exactly one C to be associated with A. But then, due to the XOR, you may not associate any B. This means the B association is always empty, and you could just as well leave it out.
Maybe (!) someone has put the multiplicity on the wrong side. If you swap them, it would make sense. If you use real names rather than A, B, C you could guess from the context.
Option 2 requires a multiplicity of exactly one near B.
Option 1 is suitable in the following cases:
1 near A, 0..1 near B
0..1 near A, 0..1 near B
0..1 near A, 1 near B
xor is a Boolean operator that yields true only when exactly one of its two operands is true.
The notation is used to specify that an instance of the base class must participate in exactly one of the associations grouped together by the {xor} constraint. Exactly one of the associations must always be active.
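In Boolean terms the operator is simply inequality of two truth values, as this Python one-liner (a sketch, not UML tooling) shows:

```python
def xor(a, b):
    # True exactly when one operand is true and the other false.
    return a != b

for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```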
If you give the inverse of Base (#.^:_1) a list as the left argument, it will produce the same result as Antibase (#:):
24 60 (#.^:_1) 123456
17 36
24 60 (#:) 123456
17 36
If you give Antibase (#:) a scalar left argument, it reproduces the behavior of Residue (|) rather than that of the inverse of Base (#.^:_1):
8 #: 1234
2
8 | 1234
2
8 (#.^:_1) 1234
2 3 2 2
Under what circumstances would the behavior of Antibase be superior to an inverted Base? And why wouldn't you just use Residue in those places? I feel like I must be missing something obvious about the utility of Antibase's behavior.
To start with: the J Dictionary defines #.^:_1 to be equivalent to #:, so it shouldn't be surprising that they're (mostly) interchangeable. In particular, the Vocabulary page for #: says:
"r&#: is inverse to r&#."
And this theoretical equivalence is also supported in practice. If you ask the implementation of J for its definition of #.^:_1, using the super-cool adverb b., you'll get:
24 60 60&#. b._1
24 60 60&#:
Here, we can see that all #.^:_1 is doing is deferring to #:. They're defined to be equivalent, and now we can see #.^:_1 -- at least in the case of a non-scalar LHA¹ -- is simply passing its arguments through to #:.
So how do we explain the discrepancy you observed? Well it turns out that, even in the pure halls of J, theory differs from practice. There is an inconsistency between dyads #: and #.^:_1 and, at least in the case of scalar left arguments, the behavior of the latter is superior to the former.
I would argue (and have argued) that this discrepancy is a bug: the Dictionary, quoted above, states the two dyads are equivalent, but that assertion is wrong when 0-:#$r (i.e. r is a scalar). Take r=.2 for example: (r&#: -: r&#.^:_1) 1 2 3 does not hold. That is, if the Dictionary's assertion were true, that statement should return 1 (true), but it actually returns 0 (false).
But, as you pointed out, it is a useful bug. Which is to say: I'd prefer the definition of #: were changed to match #.^:_1, rather than vice versa. But that's the only time #.^:_1 is more convenient than #:. In all other cases, they're equivalent, and because #: is a primitive and #.^:_1 is a compound phrase with a trailing _1, the former is much more convenient.
For example, when your right-hand argument is a numeric literal, it's easy to get that inadvertently attached to the _1 in #.^:_1, as in 2 2 2 2 #.^:_1 15 7 4 5, which will raise an error (because _1 15 7 4 5 is lexed as a single word, and therefore taken, as a whole, to be the argument to ^:). There are ways to address this, but none of them are as convenient or simple as using #:.
You could make a counterargument that in most cases, the LHA will be a scalar. That's an empirical argument, which will vary from codebase to codebase, but I personally see a lot of cases like 24 60 60 #: ..., where I'm trying to break up timestamps into duration buckets (hours, minutes, seconds), or (8#2)#: ..., where I'm trying to explode bytes into exactly 8-bit vectors (contrasted with, e.g., 8 #.^:_1 ..., which will break bytes into as many bits as it takes, whether that's 8 or 3 or 17¹). And I'd further argue that in the J community, these are both commonly used and instantly recognizable idioms, so the use of #: assists with clarity and team communication.
But, bugs notwithstanding, ultimately #: and #.^:_1 are defined to be equivalent, so which one you use is really a matter of taste. (Then why define #.^:_1 at all, you ask? Well, that's a whole 'nother story.)
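For readers who don't speak J, the two dyadic behaviors can be sketched in Python (function names are mine, not J's): a list left argument produces one digit per radix, while the scalar form of #.^:_1 produces as many digits as the number needs.

```python
def antibase(radices, n):
    # Fixed list of radices: one digit per radix, most significant first,
    # mimicking J's dyadic #: with a list left argument. Any excess
    # quotient beyond the leading radix is discarded (it wraps).
    digits = []
    for r in reversed(radices):
        n, d = divmod(n, r)
        digits.append(d)
    return digits[::-1]

def base_inverse(radix, n):
    # Scalar base: produce as many digits as needed, mimicking
    # 8 #.^:_1 1234 rather than 8 #: 1234 (which gives only 1234 mod 8).
    if n == 0:
        return [0]
    digits = []
    while n:
        n, d = divmod(n, radix)
        digits.append(d)
    return digits[::-1]

print(antibase([24, 60], 123456))  # → [17, 36]
print(base_inverse(8, 1234))       # → [2, 3, 2, 2]
print(1234 % 8)                    # → 2, what 8 #: 1234 returns
```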
¹ PS: Wanna see something cool? How does #.^:_1 achieve its magic for scalar LHAs? Let's just ask J!
2&#. b._1
($&2#>:#(2&(<.#^.))#(1&>.)#(>./)#:|#, #: ]) :.(2&#.)
First off, notice the (by now) completely unsurprising use of #:. All #.^:_1 is doing is calculating the appropriate LHA for #:.
Second, the phrase $&2#>:#(2&(<.#^.))#(1&>.)#(>./)#:|#, shows you how J calculates the number of digits required to represent (the maximum value of) the y in the base (or radix) x. And it's a useful phrase unto itself, so much so that I keep a version of it around in my personal utility library:
ndr =: 10&$: :(>:#<.#^. (1 >. >./#:|#,))
ndr 1 10 100 101 NB. Number Digits Required in [default] base 10
3
16 ndr 1 10 100 101 NB. Number Digits Required in hexadecimal
2
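The same digit-count computation can be sketched in Python (the name ndr follows the J utility above; the implementation is mine):

```python
def ndr(nums, base=10):
    # Number of Digits Required to represent the largest magnitude
    # in nums in the given base; at least one digit (for zero).
    m = max(abs(int(n)) for n in nums)
    digits = 1
    while m >= base:
        m //= base
        digits += 1
    return digits

print(ndr([1, 10, 100, 101]))      # → 3: base 10
print(ndr([1, 10, 100, 101], 16))  # → 2: hexadecimal
```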
Perhaps not an overwhelmingly compelling application but,
(4 # 256) #: 8234092340238420938420394820394820349820349820349x
is 10x faster than
256 #. inv (2^32x) | 8234092340238420938420394820394820349820349820349x
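The equivalence behind that trick can be checked outside J: taking exactly four base-256 digits keeps only the low 32 bits of the number, the same as reducing mod 2^32 first and then converting. A quick Python sketch (not J, and not a timing benchmark):

```python
n = 8234092340238420938420394820394820349820349820349

def four_base256_digits(t):
    # Exactly four base-256 digits, most significant first,
    # mimicking (4 # 256) #: t in J; higher digits are discarded.
    digits = []
    for _ in range(4):
        t, d = divmod(t, 256)
        digits.append(d)
    return digits[::-1]

fixed = four_base256_digits(n)            # fixed-width conversion of n
alt = four_base256_digits(n % 2 ** 32)    # reduce mod 2^32 first, then convert
print(fixed == alt)  # → True: both routes keep only the low 32 bits
```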
Consider the following Alloy model:
open util/ordering[C]
abstract sig A {}
sig B extends A {}
sig C extends A {}
pred show {}
run show for 7
I understand why, when I run show for 7, all the instances of this model have 7 atoms of signature C. (Well, that's not quite true. I understand that the ordered signature will always have as many atoms as the scope allows, because util/ordering tells me so. But that's not quite the same as why.)
But why do no instances of this model have any atoms of signature B? Is this a side-effect of the special handling performed for util/ordering? (Intended? Unintended?) Is util/ordering intended to be applied only to top-level signatures?
Or is there something else going on that I am missing?
In the model from which this is abstracted, I'd really like to have a name like A for the union of B and C, I'd really like C to be ordered, and I'd really like B to be unordered and non-empty. At the moment, I seem to be able to achieve any two of those goals; is there a way to manage all three at the same time?
[Addendum: I notice that specifying run show for 3 but 3 B, 3 C does achieve my three goals. By contrast, run show for 2 but 3 B produces no instances at all. Perhaps I need to understand the semantics of scope specifications better.]
Short answer: the phenomena reported result from the rules for default and implicit scopes; those rules are discussed in section B.7.6 of the Language Reference.
Longer answer:
The eventual suspicion that I should look at the semantics of scope specifications more closely proved to be warranted. In the example shown here, the rules work out exactly as documented:
For run show for 7, signature A has a default scope of 7; so do B and C. The use of the util/ordering module forces the number of C atoms to 7; that also exhausts the quota for signature A, which leaves signature B with an implicit scope of 0.
For run show for 2 but 3 B, signature A has a default scope of 2, and B has an explicit scope of 3. This leaves signature C with an implicit scope of 2 minus 3, or negative 1. That appears to count as an inconsistency; scope bounds are expected to be natural numbers.
For run show for 2 but 3 B, 3 C, signature A gets an implicit bound of 6 (the sum of its subsignatures' bounds).
As a way of gaining a better understanding of the scope rules, it proved useful to this user to execute all of the following commands:
run show for 3
run show for 3 but 2 C
run show for 3 but 2 B
run show for 3 but 2 B, 2 C
run show for 3 but 2 A
run show for 3 but 2 A, 2 C
run show for 3 but 2 A, 2 B
run show for 3 but 2 A, 2 B, 2 C
I'll leave this question in place for other answers and in the hope that it may help some other users.
I understand that the ordered signature will always have as many atoms as the scope allows, because util/ordering tells me so. But that's not quite the same as why.
The reason is that, by forcing an ordered sig to contain as many atoms as the scope allows, the translator can generate an efficient symmetry-breaking predicate, which, in most examples with ordered sigs, results in much better solving times. So it is simply a trade-off, and the design decision was to enforce this extra constraint in order to gain performance.