I need to model hydrocarbon structures using Alloy. Basically, I need to design alkane, alkene, and alkyne groups. I have created the following signatures (alkene example):
sig Hydrogen{}
sig Carbon{}
sig alkenegrp{
  c: one Carbon,
  h: set Hydrogen,
  doublebond: lone alkenegrp
}
sig alkene{
  unit: set alkenegrp
}
fact{
  all a: alkenegrp | a not in a.doublebond.*doublebond
  all a: alkenegrp | #a.h = mul[#(a.c), 2]
}
pred show_alkene{
  #alkene > 1
}
run show_alkene
This works for alkene, but whenever I try to design the same for alkane or alkyne by changing the fact, e.g. all a: alkynegrp | #a.h = minus[mul[#(a.c), 2], 2], it doesn't work.
Can anyone suggest how to implement it?
My problem statement is:
In organic chemistry, saturated hydrocarbons are organic compounds composed entirely of single bonds and saturated with hydrogen. The general formula for saturated hydrocarbons is CnH2n+2 (assuming non-cyclic structures); these are also called alkanes. Unsaturated hydrocarbons have one or more double or triple bonds between carbon atoms. Those with double bonds are called alkenes; those with one double bond have the formula CnH2n (assuming non-cyclic structures). Those containing triple bonds are called alkynes, with general formula CnH2n-2. Model hydrocarbons and give predicates to generate instances of alkane, alkene, and alkyne.
We have tried:
sig Hydrogen{}
sig Carbon{}
sig alkane{
  c: one Carbon,
  h: set Hydrogen,
  n: lone alkane
}
fact{
  //(#h) = add[mul[(#c), 2], 2]
  //all a: alkane | a not in a.*n
  all a: alkane | #a.h = mul[#(a.c), 2]
}
pred show_alkane(){}
run show_alkane
The general formula for an alkane is CnH2n+2. For multiplication we can use the built-in mul function, but we don't know what to write for the addition needed in 2n+2. What should we write so that it works for alkane?
I understand alkanes, alkenes, and alkynes a little better now, but I still don't understand why you think your Alloy model doesn't work.
To express the CnH2n-2 constraint, you can certainly write what you suggested:
all a: alkynegrp |
  #a.h = minus[mul[#(a.c), 2], 2]
The problem is only that in your alkane sig declaration you said c: one Carbon, which fixes the number of carbon atoms to exactly 1, so minus[mul[#(a.c), 2], 2] will always evaluate to exactly 0. I assume you want to allow for any number of carbons (since Cn), so you should change c: one Carbon to c: set Carbon. If you then run the show_alkane predicate, you should get some instances where the number of carbons is greater than 1 and thus the number of hydrogens is greater than 0.
Also, for the alkane formula
all a: alkane |
  #a.h = plus[mul[#(a.c), 2], 2]
the default scope of 3 will not suffice, because you will need at least 4 Hydrogen atoms when a.c is non-empty, but you can fix that by explicitly giving a scope:
run show_alkane for 8
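Putting the two fixes together, here is a minimal self-contained sketch of the corrected alkane model (the set multiplicity on c and the scope of 8 are exactly the adjustments described above; note also that the acyclicity constraint uses ^n rather than the commented-out *n from the question, because a.*n always contains a itself):
sig Hydrogen{}
sig Carbon{}
sig alkane{
  c: set Carbon,     // set rather than one, so the number of carbons can vary
  h: set Hydrogen,
  n: lone alkane
}
fact{
  all a: alkane | a not in a.^n                     // chains are acyclic
  all a: alkane | #a.h = plus[mul[#(a.c), 2], 2]    // CnH2n+2
}
pred show_alkane(){}
run show_alkane for 8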
If this wasn't the problem you were talking about, please be more specific about why you think "it doesn't work", i.e., what is it that you expect Alloy to do and what is it that Alloy actually does.
The following model produces instances with exactly 2 address relations when the number of Books is limited to 1; however, if more Books are allowed, it creates instances with 0-3 address relations. Is this my misunderstanding of how Alloy works?
sig Name{}
sig Addr{}
sig Book { addr: Name -> lone Addr }
pred show(b:Book) { #b.addr = 2 }
// nr. of address relations in every Book should be 2
run show for 3 but 2 Book
// works as expected with 1 Book
Each instance of show should include one Book, labeled as being the b of show, which has two address pairs. But show does not say that every book must have two address pairs, only that at least one must have two address pairs.
[Postscript]
When you ask Alloy to show you an instance of a predicate, for example by the command run show, then Alloy should show you an instance: that is (quoting section 5.2.1 of Software abstractions, which you already have open) "an assignment of values to the variables of the constraint for which the constraint evaluates to true." In any given universe, there may be many other possible assignments of values to the variables for which the constraint evaluates to false; the existence of such possible non-suitable assignments is unavoidable in any universe with more than one atom.
Informally, we can think of a run command for a predicate P with arguments X, Y, Z as requesting that Alloy show us universes which satisfy the expression
some X, Y, Z : univ | P[X, Y, Z]
The run command does not amount to the expression
all X, Y, Z : univ | P[X, Y, Z]
If you want to see only universes in which every book has two pairs in its addr relation, then say so:
pred all_books_have_2 { all b : Book | #b.addr = 2 }
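For instance, running that predicate directly (a small sketch reusing the scope from the question) yields only universes in which every Book in the universe has exactly two address pairs:
run all_books_have_2 for 3 but 2 Book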
I think it's better that run have implicit existential quantification, rather than implicit universal quantification. One way to see why is to imagine a model that defines trees, such as:
sig Node { parent : lone Node }
fact parent_acyclic { no n : Node | n in n.^parent }
Suppose we get tired of seeing universes in which every tree is trivial and contains a single node. I'd like to be able to define a predicate that guarantees at least one tree with depth greater than 1, by writing
pred nontrivial[n : Node]{ some n.parent }
Given the constraint that trees be acyclic, there can never be a non-empty universe in which the predicate nontrivial holds for all nodes. So if run and pred had the semantics you have been supposing, we could not use nontrivial to find universes containing non-trivial trees.
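With the existential reading, on the other hand, a command like the following does produce universes containing a non-trivial tree, because Alloy only needs some node n for which nontrivial holds (a sketch, assuming the Node declaration above):
run nontrivial for 3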
I hope this helps.
For a university project I'm trying to model the Chinese game of Go (http://en.wikipedia.org/wiki/Go_%28game%29) in Alloy (I'm using version 4.2).
I managed to write the base structure. Go is played on a 9 x 9 board, but I'm using a smaller 3 x 3 board so it checks faster.
The board is made of crosses which can be either empty or occupied by black or white stones.
abstract sig Colour {}
one sig White, Black, Empty extends Colour {}
abstract sig Cross {
  Status: one Colour,
  near: some Cross,
  group: lone Group
}
one sig C11, C12, C13,
        C21, C22, C23,
        C31, C32, C33 extends Cross {}
sig Group {
  stones: some Cross,
  freedom: some Cross
}
pred closeStones {
  near =
    C11->C12 + C11->C21 +
    C12->C11 + C12->C13 + C12->C22 +
    C13->C12 + C13->C23 +
    C21->C22 + C21->C11 + C21->C31 +
    C22->C21 + C22->C23 + C22->C12 + C22->C32 +
    C23->C22 + C23->C13 + C23->C33 +
    C31->C32 + C31->C21 +
    C32->C31 + C32->C33 + C32->C22 +
    C33->C32 + C33->C23
}
fact stones2 {
  all g : Group |
  all c : Cross |
    (c.group = g) iff c in g.stones
}
fact noGroup {
  all c : Cross | (c.Status = Empty) iff c.group = none
}
fact groupNearStones {
  all disj c, d : Cross |
    ((d in c.near) and c.Status = d.Status)
    iff
    d.group = c.group
}
The problem is: following Go rules, every stone must be considered part of a group. This group is made of all the adjacent stones with the same colour.
My fact groupNearStones should be sufficient to describe that condition, but this way I can't get groups made of more than 3 stones.
I've tried rewriting it in different ways, but either the analyzer says it found "0 variables" or it groups up all the stones with the same status, regardless of whether they're near each other or not.
If you could give me any insight I would be grateful, since I've been breaking my head over this simple matter for days.
Ask yourself two questions.
First: in Go, what constitutes a group? You say it yourself: it is a set of adjacent stones with the same color. This does not mean that every stone in the group must be adjacent to every other; it suffices for every stone to be adjacent to another stone in the group.
So from a formal point of view: given a stone S, the set of stones in the same group as S is the set of stones reachable from S through the relation same_color_and_adjacent, i.e. the reflexive transitive closure S.*same_color_and_adjacent.
Second: what constitutes being the same color and adjacent? I think you can define this easily, with what you have.
On a side issue; you may find it easier to scale the model to arbitrary sizes of boards if you reify the notion of rows and columns.
I hope this helps.
[Addendum:] Apparently it doesn't help enough. I'll try to be a bit more explicit, but I want the full solution to come from you and not from me.
Note that the point of defining a relation like same_color_and_adjacent is not to eliminate the formulation of facts or predicates in your model, but to make them easier to write and to write correctly. It's not magic.
Consider first a reformulation of your fact groupNearStones in terms of a single relation that holds for pairs of stones which are adjacent and have the same color. The relation can be defined by modifying your declaration for Cross:
abstract sig Cross {
  Status: one Colour,
  near: some Cross,
  group: lone Group,
  near_and_similar: some Cross
}{
  near_and_similar = near & { c : Cross | c.@Status = Status }
}
Now your existing fact can be written as:
fact groupNearStones2 {
  all disj c, d : Cross |
    d in c.near_and_similar
    iff
    d.group = c.group
}
Actually, I would write both versions of groupNearStones as predicates, not facts. That would allow you to check that the new formulation is really equivalent to the old one by running a check like:
assert GNS_equal_GNS2 {
  groupNearStones iff groupNearStones2
}
check GNS_equal_GNS2
(I have not run such a check; I'm being a little lazy.)
Now, let us consider the problems you mention:
You never get groups containing more than three stones. Actually, given the formulation of groupNearStones, I'm surprised you get groups with more than two. Consider what groupNearStones says: any two stones in a group are adjacent and have the same color. Draw a board on a piece of paper and draw a group of five stones. Now ask whether such a group satisfies the fact groupNearStones. Say the group is C11, C12, C13, C21, C22. What does groupNearStones say about the pair C21, C13?
Do you see the problem? Are the relations near and 'close enough to be in the same group' really the same? If they are not the same, are they related?
Hint: think about transitive closure.
You never get groups containing a single stone.
How surprising is this, given that groupNearStones says that c.group = d.group only if c and d are disjoint? If you never get single-stone groups, then every stone that should be a single-stone group is not classed as being in any group at all, since such a stone must not satisfy the expression s.group = s.group.
Do you see the problem?
Hint: think about reflexive transitive closure.
I'm curious as to when evaluation sets in; apparently certain operators are transformed into clauses rather than evaluated:
abstract sig Element {}
one sig A, B, C extends Element {}
one sig Test {
  test: set Element
}
pred Test1 { Test.test = A + B }
pred Test2 { Test.test = Element - C }
Running it for Test1 and Test2 respectively gives different numbers of vars/clauses, specifically:
Test1: 0 vars, 0 primary vars, 0 clauses
Test2: 5 vars, 3 primary vars, 4 clauses
So although Element is abstract and all its members and their cardinalities are known, the difference seems not to be computed in advance, while the sum is. I don't want to make any assumptions, so I'm interested in why that is. Is the + operator special?
To give some context, I tried to limit the domain of a relation and found, that using only + seems to be more efficient, even when the sets are completely known in advance.
That is pretty much the right conclusion. The reason is the fact that the Alloy Analyzer tries to infer relation bounds from certain Alloy idioms. It uses a conservative approximation that is always sound for set union and product, but not for set difference. That's why for Test1 in the example above the Alloy Analyzer infers a fixed bound for the test relation (this/Test.test: [[[A$0], [B$0]]]) so no solver needs to be invoked; for Test2, the bound for the test relation cannot be shrunk so is set to be the most permissive (this/Test.test: [[], [[A$0], [B$0], [C$0]]]), thus a solver needs to be invoked to find a solution satisfying the constraints given the bounds.
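If you want to convince yourself that the two formulations nevertheless denote the same set even though they translate differently, a quick sanity check (a sketch, assuming the signatures from the question) is to assert their equivalence; since Element is exactly A + B + C, Alloy finds no counterexample:
check { Test1 iff Test2 } for 3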
In a model I started to sketch in Alloy the other day, I get the following message when I attempt to find an instance of a particular predicate:
Translation capacity exceeded.
In this scope, universe contains 34 atoms
and relations of arity 12 cannot be represented.
Visit http://alloy.mit.edu/ for advice on refactoring.
Any suggestions of where on the site alloy.mit.edu to look? I'm not finding anything with an obvious label like "Refactoring models that exceed translation capacity".
That's the essential question.
[Postscript: the cause of my problem appears to have been a bad initial formulation of the quantified variable declarations I was using in a predicate; the problem went away once I got the syntax of the declarations right. The full details are not instructive enough to be worth keeping on record, so I'm dropping the original description of the specifics. The short version is: to elicit the instantiation of a particular concrete example, I initially wrote a predicate of the form
pred m {
one t1 : table,
r1, r2, r3 : row,
c1, c2 : column,
c11, c21 : headingcell,
c12, c22, c13, c23 : datacell | {
... // description of the example here
}
}
The one scopes all twelve variables, and is [I am told on good authority] translated internally into a set comprehension defined by a relation of arity 12. What I wanted to say was something more like the following, which does not raise the translation-capacity issue:
pred m {
some t1 : table |
some disj r1, r2, r3 : row |
some disj c1, c2 : column |
some disj c11, c21 : headingcell |
some disj c12, c22, c13, c23 : datacell | {
...
}
}
So: one way to fix some models which elicit the translation-capacity error message is to clean up the quantification of variables.
The basic question, however, retains its interest: when a model elicits the translation-capacity error message and the quantifiers are already clean and correct, is there a document to read?]
The kind of refactoring needed in this case is unlikely to be a simple syntactic one. Rather, it here means restructuring the model so that it doesn't use a relation of that high arity. In your example above, I can't really see which relation has arity 12. If you post (or send me) a self-contained model, I can look at it, identify the problematic relation, and maybe even suggest how to avoid it.
Is there a typed programming language where I can constrain types like the following two examples?
A Probability is a floating point number with minimum value 0.0 and maximum value 1.0.
type Probability subtype of float
where
max_value = 1.0
min_value = 0.0
A Discrete Probability Distribution is a map, where: the keys should all be the same type, the values are all Probabilities, and the sum of the values = 1.0.
type DPD<K> subtype of map<K, Probability>
where
sum(values) = 1.0
As far as I understand, this is not possible with Haskell or Agda.
What you want is called refinement types.
It's possible to define Probability in Agda: Prob.agda. The probability mass function type, with the sum condition, is defined at line 264.
There are languages with more direct support for refinement types than Agda, for example ATS.
You can do this in Haskell with Liquid Haskell which extends Haskell with refinement types. The predicates are managed by an SMT solver at compile time which means that the proofs are fully automatic but the logic you can use is limited by what the SMT solver handles. (Happily, modern SMT solvers are reasonably versatile!)
One problem is that I don't think Liquid Haskell currently supports floats. If it doesn't though, it should be possible to rectify because there are theories of floating point numbers for SMT solvers. You could also pretend floating point numbers were actually rational (or even use Rational in Haskell!). With this in mind, your first type could look like this:
{p : Float | p >= 0 && p <= 1}
Your second type would be a bit harder to encode, especially because maps are an abstract type that's hard to reason about. If you used a list of pairs instead of a map, you could write a "measure" like this:
measure total :: [(a, Float)] -> Float
total [] = 0
total ((_, p):ps) = p + total ps
(You might want to wrap [] in a newtype too.)
Now you can use total in a refinement to constrain a list:
{dist: [(a, Float)] | total dist == 1}
The neat trick with Liquid Haskell is that all the reasoning is automated for you at compile time, in return for using a somewhat constrained logic. (Measures like total are also very constrained in how they can be written—it's a small subset of Haskell with rules like "exactly one case per constructor".) This means that refinement types in this style are less powerful but much easier to use than full-on dependent types, making them more practical.
Perl6 has a notion of "type subsets" which can add arbitrary conditions to create a "sub type."
For your question specifically:
subset Probability of Real where 0 .. 1;
and
role DPD[::T] {
    has Map[T, Probability] $.map
        where [+](.values) == 1; # calls `.values` on Map
}
(Note: in current implementations, the where part is checked at run-time. But since "real types" are checked at compile-time (that includes your classes), and since there are pure annotations (is pure) inside the standard library (which is mostly Perl 6 itself), including on operators like *, making this check happen at compile time is only a matter of effort, and it shouldn't be much.)
More generally:
# (%% is the "divisible by", which we can negate, becoming "!%%")
subset Even of Int where * %% 2; # * creates a closure around its expression
subset Odd of Int where -> $n { $n !%% 2 } # using a real "closure" ("pointy block")
Then you can check if a number matches with the Smart Matching operator ~~:
say 4 ~~ Even; # True
say 4 ~~ Odd; # False
say 5 ~~ Odd; # True
And, thanks to multi subs (or multi whatever, really – multi methods or others), we can dispatch based on that:
multi say-parity(Odd $n) { say "Number $n is odd" }
multi say-parity(Even) { say "This number is even" } # we don't name the argument, we just put its type
#Also, the last semicolon in a block is optional
Nimrod is a new language that supports this concept. They are called subranges. Here is an example; you can learn more about the language on its website.
type
  TSubrange = range[0..5]
For the first part, yes, that would be Pascal, which has integer subranges.
The Whiley language supports something very much like what you are saying. For example:
type natural is (int x) where x >= 0
type probability is (real x) where 0.0 <= x && x <= 1.0
These types can also be implemented as pre-/post-conditions like so:
function abs(int x) => (int r)
ensures r >= 0:
    //
    if x >= 0:
        return x
    else:
        return -x
The language is very expressive. These invariants and pre-/post-conditions are verified statically using an SMT solver. This handles examples like the above very well, but currently struggles with more complex examples involving arrays and loop invariants.
For anyone interested, I thought I'd add an example of how you might solve this in Nim as of 2019.
The first part of the question is straightforward, since in the interval since this question was asked Nim has gained the ability to define subrange types on floats (as well as ordinal and enum types). The code below defines two new float subrange types, Probability and ProbOne.
The second part of the question is trickier: defining a type with constraints on a function of its fields. My proposed solution doesn't directly define such a type; instead it uses a macro (makePmf) to tie the creation of a constant Table[T, Probability] object to the ability to create a valid ProbOne object (thus ensuring that the PMF is valid). The makePmf macro is evaluated at compile time, ensuring that you can't create an invalid PMF table.
Note that I'm a relative newcomer to Nim so this may not be the most idiomatic way to write this macro:
import macros, tables

type
  Probability = range[0.0 .. 1.0]
  ProbOne = range[1.0 .. 1.0]

macro makePmf(name: untyped, tbl: untyped): untyped =
  ## Construct a Table[T, Probability] ensuring
  ## Sum(Probabilities) == 1.0

  # helper templates
  template asTable(tc: untyped): untyped =
    tc.toTable
  template asProb(f: float): untyped =
    Probability(f)

  # ensure that the passed value is already
  # a table constructor
  tbl.expectKind nnkTableConstr

  var
    totprob: Probability = 0.0
    fval: float
    newtbl = newTree(nnkTableConstr)

  # create Table[T, Probability]
  for child in tbl:
    child.expectKind nnkExprColonExpr
    child[1].expectKind nnkFloatLit
    fval = floatVal(child[1])
    totprob += Probability(fval)
    newtbl.add(newColonExpr(child[0], getAst(asProb(fval))))

  # this serves as the check that probs sum to 1.0
  discard ProbOne(totprob)

  result = newStmtList(newConstStmt(name, getAst(asTable(newtbl))))

makePmf(uniformpmf, {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25})

# this static block will show that the macro was evaluated at compile time
static:
  echo uniformpmf

# the following invalid PMF won't compile
# makePmf(invalidpmf, {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.15})
Note: A cool benefit of using a macro is that nimsuggest (as integrated into VS Code) will even highlight attempts to create an invalid Pmf table.
Modula-3 has subrange types (subranges of ordinals). So for your Example 1, if you're willing to map probability to an integer range of some precision, you could use this:
TYPE PROBABILITY = [0..100]
Add significant digits as necessary.