I'm curious as to when evaluation sets in; apparently certain operators are translated into clauses rather than evaluated up front:
abstract sig Element {}
one sig A,B,C extends Element {}
one sig Test {
test: set Element
}
pred Test1 { Test.test = A+B }
pred Test2 { Test.test = Element-C }
Running it for Test1 and Test2 respectively gives different numbers of vars/clauses, specifically:
Test1: 0 vars, 0 primary vars, 0 clauses
Test2: 5 vars, 3 primary vars, 4 clauses
So although Element is abstract and all its members and their cardinalities are known, the difference seems not to be computed in advance, while the sum is. I don't want to make any assumptions, so I'm interested in why that is. Is the + operator special?
To give some context, I tried to limit the domain of a relation and found, that using only + seems to be more efficient, even when the sets are completely known in advance.
That is pretty much the right conclusion. The reason is that the Alloy Analyzer tries to infer relation bounds from certain Alloy idioms. It uses a conservative approximation that is always sound for set union and product, but not for set difference. That's why for Test1 in the example above the Alloy Analyzer infers a fixed bound for the test relation (this/Test.test: [[[A$0], [B$0]]]), so no solver needs to be invoked; for Test2, the bound for the test relation cannot be shrunk, so it is set to the most permissive bound (this/Test.test: [[], [[A$0], [B$0], [C$0]]]), and a solver must be invoked to find a solution satisfying the constraints given those bounds.
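The practical upshot: if a fixed set can be phrased as a union of known constants, the bound inference applies and the solver can be skipped entirely. A minimal sketch (Test2b is a hypothetical name) that denotes the same set as Test2's Element-C, but via union:
pred Test2b { Test.test = A+B } // same set as Element-C in this model, phrased as a union
run Test2b // reports 0 vars, 0 primary vars, 0 clauses, just like Test1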
A poster asked how to compare functions in Alloy. While testing a small example (comparing predicates instead of functions) to answer the question with, I noticed the following behavior, which puzzles me.
The analyzer finds no counterexamples whenever the scope of the check command is higher than 3 and the fact 'f1' is active. Deactivating the fact, the analyzer works as expected. Why does the redundant fact 'f1' change the analyzer's behavior like this, and why only when the scope is higher than 3?
open util/ordering [V]
sig V {}
fact f1 {
# V > 0
}
pred p1 [x: V] {
x = last
}
pred p2 [x: V] {
x = first
}
assert a1 {
all x: V | p1[x] <=> p2[x]
}
check a1 for 3
It appears that whenever the check scope is 4 or higher and 'f1' is active, the analyzer reports '0 vars. 0 primary vars. 0 clauses.'
I am unable at the moment to look into the details, but it seems likely that you're seeing overflow behavior, based in part on the fact that Alloy's integers are very narrow (4 bits by default, I believe?) two's-complement integers, so overflow happens regularly.
Several changes might be instructive, separately or together, to see if they affect the behavior.
replace fact f1 with some V
turn on the "Forbid Overflow" option
provide an explicit bit width for Int using the scope command; for other signatures the scope number specifies a maximum number of instances, but for Int it specifies a bit width (see the sketch after this list)
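For instance (a sketch against the model above; the width 5 is an arbitrary choice), an explicit bit width can be given like this:
check a1 for 3 but 5 Int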
As Loïc Gammaitoni put it in another question here: "You should always be careful when playing with numbers in Alloy".
The following model produces instances with exactly 2 address relations when the number of Books is limited to 1; however, if more Books are allowed, it will create instances with 0-3 address relations. Am I misunderstanding how Alloy works?
sig Name{}
sig Addr{}
sig Book { addr: Name -> lone Addr }
pred show(b:Book) { #b.addr = 2 }
// nr. of address relations in every Book should be 2
run show for 3 but 2 Book
// works as expected with 1 Book
Each instance of show should include one Book, labeled as being the b of show, which has two address pairs. But show does not say that every book must have two address pairs, only that at least one must have two address pairs.
[Postscript]
When you ask Alloy to show you an instance of a predicate, for example by the command run show, then Alloy should show you an instance: that is (quoting section 5.2.1 of Software abstractions, which you already have open) "an assignment of values to the variables of the constraint for which the constraint evaluates to true." In any given universe, there may be many other possible assignments of values to the variables for which the constraint evaluates to false; the existence of such possible non-suitable assignments is unavoidable in any universe with more than one atom.
Informally, we can think of a run command for a predicate P with arguments X, Y, Z as requesting that Alloy show us universes which satisfy the expression
some X, Y, Z : univ | P[X, Y, Z]
The run command does not amount to the expression
all X, Y, Z : univ | P[X, Y, Z]
If you want to see only universes in which every book has two pairs in its addr relation, then say so:
pred all_books_have_2 { all b : Book | #b.addr = 2 }
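Running that predicate under the original scope (a minimal sketch) then shows only universes in which every book has exactly two address pairs:
run all_books_have_2 for 3 but 2 Book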
I think it's better that run have implicit existential quantification, rather than implicit universal quantification. One way to see why is to imagine a model that defines trees, such as:
sig Node { parent : lone Node }
fact parent_acyclic { no n : Node | n in n.^parent }
Suppose we get tired of seeing universes in which every tree is trivial and contains a single node. I'd like to be able to define a predicate that guarantees at least one tree with depth greater than 1, by writing
pred nontrivial[n : Node]{ some n.parent }
Given the constraint that trees be acyclic, there can never be a non-empty universe in which the predicate nontrivial holds for all nodes. So if run and pred had the semantics you have been supposing, we could not use nontrivial to find universes containing non-trivial trees.
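Under the existential reading that run actually uses, a command like the following sketch can still find universes containing a non-trivial tree:
run nontrivial for 3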
I hope this helps.
Is there a typed programming language where I can constrain types like the following two examples?
A Probability is a floating point number with minimum value 0.0 and maximum value 1.0.
type Probability subtype of float
where
min_value = 0.0
max_value = 1.0
A Discrete Probability Distribution is a map, where: the keys should all be the same type, the values are all Probabilities, and the sum of the values = 1.0.
type DPD<K> subtype of map<K, Probability>
where
sum(values) = 1.0
As far as I understand, this is not possible with Haskell or Agda.
What you want is called refinement types.
It's possible to define Probability in Agda: Prob.agda
The probability mass function type, with the sum condition, is defined at line 264.
There are languages with more direct support for refinement types than Agda, for example ATS.
You can do this in Haskell with Liquid Haskell which extends Haskell with refinement types. The predicates are managed by an SMT solver at compile time which means that the proofs are fully automatic but the logic you can use is limited by what the SMT solver handles. (Happily, modern SMT solvers are reasonably versatile!)
One problem is that I don't think Liquid Haskell currently supports floats. If it doesn't, though, it should be possible to rectify this, because there are theories of floating-point numbers for SMT solvers. You could also pretend floating-point numbers are actually rational (or even use Rational in Haskell!). With this in mind, your first type could look like this:
{p : Float | p >= 0 && p <= 1}
Your second type would be a bit harder to encode, especially because maps are an abstract type that's hard to reason about. If you used a list of pairs instead of a map, you could write a "measure" like this:
measure total :: [(a, Float)] -> Float
total [] = 0
total ((_, p):ps) = p + total ps
(You might want to wrap [] in a newtype too.)
Now you can use total in a refinement to constrain a list:
{dist: [(a, Float)] | total dist == 1}
The neat trick with Liquid Haskell is that all the reasoning is automated for you at compile time, in return for using a somewhat constrained logic. (Measures like total are also very constrained in how they can be written—it's a small subset of Haskell with rules like "exactly one case per constructor".) This means that refinement types in this style are less powerful but much easier to use than full-on dependent types, making them more practical.
Perl6 has a notion of "type subsets" which can add arbitrary conditions to create a "sub type."
For your question specifically:
subset Probability of Real where 0 .. 1;
and
role DPD[::T] {
    has Map[T, Probability] $.map
        where [+](.values) == 1; # calls `.values` on Map
}
(Note: in current implementations, the "where" part is checked at run time. But since "real types" are checked at compile time (that includes your classes), and since there are purity annotations (is pure) inside the standard library (which is mostly written in Perl 6 itself), including on operators like *, making these checks static is only a matter of implementation effort, and it shouldn't take much more.)
More generally:
# (%% is the "divisible by", which we can negate, becoming "!%%")
subset Even of Int where * %% 2; # * creates a closure around its expression
subset Odd of Int where -> $n { $n !%% 2 } # using a real "closure" ("pointy block")
Then you can check if a number matches with the Smart Matching operator ~~:
say 4 ~~ Even; # True
say 4 ~~ Odd; # False
say 5 ~~ Odd; # True
And, thanks to multi subs (or multi whatever, really – multi methods or others), we can dispatch based on that:
multi say-parity(Odd $n) { say "Number $n is odd" }
multi say-parity(Even) { say "This number is even" } # we don't name the argument, we just put its type
#Also, the last semicolon in a block is optional
Nimrod is a new language that supports this concept; such types are called subranges. Here is an example:
type
  TSubrange = range[0..5]
For the first part, yes, that would be Pascal, which has integer subranges.
The Whiley language supports something very much like what you are saying. For example:
type natural is (int x) where x >= 0
type probability is (real x) where 0.0 <= x && x <= 1.0
These types can also be implemented as pre-/post-conditions like so:
function abs(int x) => (int r)
ensures r >= 0:
    //
    if x >= 0:
        return x
    else:
        return -x
The language is very expressive. These invariants and pre-/post-conditions are verified statically using an SMT solver. This handles examples like the above very well, but currently struggles with more complex examples involving arrays and loop invariants.
For anyone interested, I thought I'd add an example of how you might solve this in Nim as of 2019.
The first part of the question is straightforward, since in the interval since this question was asked, Nim has gained the ability to define subrange types on floats (as well as ordinal and enum types). The code below defines two new float subrange types, Probability and ProbOne.
The second part of the question is more tricky: defining a type with constraints on a function of its fields. My proposed solution doesn't directly define such a type but instead uses a macro (makePmf) to tie the creation of a constant Table[T,Probability] object to the ability to create a valid ProbOne object (thus ensuring that the PMF is valid). The makePmf macro is evaluated at compile time, ensuring that you can't create an invalid PMF table.
Note that I'm a relative newcomer to Nim so this may not be the most idiomatic way to write this macro:
import macros, tables

type
  Probability = range[0.0 .. 1.0]
  ProbOne = range[1.0 .. 1.0]

macro makePmf(name: untyped, tbl: untyped): untyped =
  ## Construct a Table[T, Probability] ensuring
  ## Sum(Probabilities) == 1.0

  # helper templates
  template asTable(tc: untyped): untyped =
    tc.toTable
  template asProb(f: float): untyped =
    Probability(f)

  # ensure that the passed value is a table constructor
  tbl.expectKind nnkTableConstr

  var
    totprob: Probability = 0.0
    fval: float
    newtbl = newTree(nnkTableConstr)

  # create Table[T, Probability]
  for child in tbl:
    child.expectKind nnkExprColonExpr
    child[1].expectKind nnkFloatLit
    fval = floatVal(child[1])
    totprob += Probability(fval)
    newtbl.add(newColonExpr(child[0], getAst(asProb(fval))))

  # this serves as the check that probs sum to 1.0
  discard ProbOne(totprob)

  result = newStmtList(newConstStmt(name, getAst(asTable(newtbl))))

makePmf(uniformpmf, {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25})

# this static block will show that the macro was evaluated at compile time
static:
  echo uniformpmf

# the following invalid PMF won't compile
# makePmf(invalidpmf, {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.15})
Note: A cool benefit of using a macro is that nimsuggest (as integrated into VS Code) will even highlight attempts to create an invalid Pmf table.
Modula 3 has subrange types. (Subranges of ordinals.) So for your Example 1, if you're willing to map probability to an integer range of some precision, you could use this:
TYPE PROBABILITY = [0..100]
Add significant digits as necessary.
I need to model hydrocarbon structures using Alloy.
Basically I need to design alkane, alkene and alkyne groups.
I have created the following signatures (alkene example):
sig Hydrogen {}
sig Carbon {}
sig alkenegrp {
  c: one Carbon,
  h: set Hydrogen,
  doublebond: lone alkenegrp
}
sig alkene {
  unit: set alkenegrp
}
fact {
  all a: alkenegrp | a not in a.doublebond.*doublebond
  all a: alkenegrp | #a.h = mul[#(a.c), 2]
}
pred show_alkene {
  #alkene > 1
}
run show_alkene
This works for alkene, but whenever I try to design the same for alkane or alkyne by changing the fact, e.g. all a:alkynegrp | #a.h = minus[mul[#(a.c),2],2], it doesn't work.
Can anyone suggest how I can implement it?
My problem statement is:
In organic chemistry, saturated hydrocarbons are organic compounds composed entirely of single bonds and are saturated with hydrogen. The general formula for saturated hydrocarbons is CnH2n+2 (assuming non-cyclic structures); these are also called alkanes. Unsaturated hydrocarbons have one or more double or triple bonds between carbon atoms. Those with double bonds are called alkenes; those with one double bond have the formula CnH2n (assuming non-cyclic structures). Those containing triple bonds are called alkynes, with general formula CnH2n-2. Model hydrocarbons and give predicates to generate instances of alkane, alkene and alkyne.
We have tried:
sig Hydrogen {}
sig Carbon {}
sig alkane {
  c: one Carbon,
  h: set Hydrogen,
  n: lone alkane
}
fact {
  // #h = add[mul[#c, 2], 2]
  // all a: alkane | a not in a.*n
  all a: alkane | #a.h = mul[#(a.c), 2]
}
pred show_alkane() {}
run show_alkane
The general formula for alkane is CnH2n+2. For multiplication we can use the built-in mul function, but we cannot figure out how to write the addition needed for CnH2n+2. What should we write so that it works for alkane?
I understand alkanes, alkenes, and alkynes a little better now, but I still don't understand why you think your Alloy model doesn't work.
To express the CnH2n-2 constraint, you can certainly write what you suggested
all a:alkynegrp |
#a.h = minus[mul[#(a.c), 2], 2]
The problem is only that in your alkane sig declaration you said c: one Carbon, which fixes the number of carbon atoms to exactly 1, so minus[mul[#(a.c), 2], 2] will always evaluate to exactly 0. I assume you want to allow any number of carbons (since Cn), so you should change c: one Carbon to c: set Carbon. If you then run the show_alkane predicate, you should get some instances where the number of carbons is greater than 1 and thus the number of hydrogens is greater than 0.
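Concretely, the corrected declaration would look like this:
sig alkane {
  c: set Carbon, // was: one Carbon
  h: set Hydrogen,
  n: lone alkane
}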
Also, for the alkane formula
all a:alkynegrp |
#a.h = plus[mul[#(a.c), 2], 2]
the default scope of 3 will not suffice, because you will need at least 4 atoms of hydrogen when a.c is non-empty, but you can fix that by explicitly giving a scope
run show_alkane for 8
If this wasn't the problem you were talking about, please be more specific about why you think "it doesn't work", i.e., what is it that you expect Alloy to do and what is it that Alloy actually does.
Can someone please help me understand predicates using the following example:
sig Light{}
sig LightState { color: Light -> one Color}
sig Junction {lights: set Light}
fun redLigths(s:LightState) : set Light{ s.color.Red}
pred mostlyRed(s:LightState, j:Junction){
lone j.lights - redLigths(s)
}
I have the below questions about the above code:
1) What happens if the above predicate is true?
2) What happens if it is false?
3) Can someone show me a bit of alloy code that uses the above code and clarifies the meaning of predicates through the code.
I am just trying to understand how do we use the above predicate.
Nothing "happens" until you place a call to a predicate or a function in a command to find an example or counterexample.
First, let's use the right terminology: nothing 'happens' when a predicate is true; it's more like the other way around. An instance (an allocation of atoms to sets) satisfies (or doesn't satisfy) some condition, making the predicate true (or false).
Also, your model is incomplete, because there is no sig declaration for Color (which should include an attribute called Red).
I assume you want to model a world with crossroads containing traffic lights, if so I would use the following model:
abstract sig Color {}
one sig Red, Yellow, Green extends Color {}
sig Light {
  color: Color
}
sig Junction {
  lights: set Light
}
// This is just for realism, make sure each light belongs to exactly one junction
fact {
  Light = Junction.lights
  no x, y: Junction | x != y and some x.lights & y.lights
}
fun count[j: Junction, c: Color]: Int {
  #{x: Light | x in j.lights and x.color = c}
}
pred mostly[j: Junction, c: Color] {
  no cc: Color | cc != c and count[j, cc] >= count[j, c]
}
run {
  some j: Junction | mostly[j, Red]
} for 10 Light, 2 Junction, 10 int
Looking at the above, I'm using the # operator to count the number of atoms in a set, and I'm specifying a bit width of 10 for integers just so that I don't stumble into an overflow when using the # operator on large sets.
When you execute this, you will get an instance with at least one junction that has mostly red lights, it will be marked as $j in the visualizer.
Hope this helps.
What the predicate means in the example you gave is this: it takes the difference between set A, in this case the relation j.lights, and another set B, the junction's red lights returned from the function redLigths. When you run the predicate mostlyRed, the analyzer is constrained to instances that satisfy this condition.
Note that the multiplicity keyword lone in the predicate's body is evaluated only after the difference between sets A and B has been computed, ensuring that at most one non-red light remains. I hope my explanation was helpful; I welcome constructive criticism. Thanks.
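For completeness, a minimal command that exercises the original predicate could look like the sketch below (it assumes a Color sig containing Red has been added to the model, as noted above):
run { some s: LightState, j: Junction | mostlyRed[s, j] } for 3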