How to express Nothing else in the set? - alloy

[Update] I spent a lot of time studying @Hovercouch's fantastic solution (see below). I took his solution, along with Peter Kriens' insights, and wrote up a summary: 3 ways to model the set of non-negative even numbers. I welcome your comments.
I am trying to create an Alloy model that defines a set of integers. I want to constrain the set to the integers 0, 2, 4, ...
I want to use a "generative" approach to defining the set:
0 is in the set.
If i is in the set, then i+2 is in the set.
Nothing else is in the set.
I am struggling with the last one - nothing else is in the set. How do I express that?
Below is the Alloy model that I created.
one sig PositiveEven {
    elements: set Int
}

pred generate_set_members {
    0 in PositiveEven.elements
    all i: Int | i in PositiveEven.elements => i.plus[2] in PositiveEven.elements
    // Nothing else is in the set - How to express this?
}

The simplest way to do this would be to create a relation that maps each number N to N+2, and then take the reflexive-transitive closure of that relation starting from 0.
one sig PositiveEven {
    elements: set Int
}

one sig Generator {
    rel: Int -> Int
} {
    all i: Int | i.rel = i.next.next
}

pred generate_set_members {
    PositiveEven.elements = 0.*(Generator.rel)
}

assert only_positive_elements {
    generate_set_members =>
        all i: Int | i in PositiveEven.elements <=> i >= 0 and i.rem[2] = 0
}
Note that you cannot use i.plus[2] instead of i.next.next, because Alloy integers overflow to negative.
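A minimal check makes the wrap-around concrete (a sketch, assuming the default 4-bit bitwidth, so Int ranges over -8..7, and with the Analyzer's overflow prevention switched off):
check plus_wraps_around {
    7.plus[1] < 0 -- the successor of the largest representable integer wraps around to a negative value
} for 4 Int
The next relation, by contrast, simply has no tuple for the largest integer, so the closure 0.*(Generator.rel) stops there instead of wrapping into the negatives.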

What do you think of this:
let iset[min, max, step] = { i : Int |
    i >= min
    and i < max
    and i.minus[min].div[step].mul[step] = i.minus[min]
}

pred show[ s : set Int ] {
    iset[ 0, 10, 2 ] = s
}

run show for 0 but 8 int
The visualiser does not show the Int types so look in the Tree or Text view.

How to sum more than two numbers in Alloy Analyzer?

I am trying to sum all the numbers in a set in Alloy.
For instance, in the signature abc, I want the value to be the sum of a.value + b.value + c.value, which is 4+1+3=8.
However, if I use "+", it gives me the union set and not the sum.
PS. I know there is the "plus" (as I used it in sig sumab), but this only allows me to sum two values.
Thanks
open util/integer

sig a { value: Int } {
    value = 4
}
sig b { value: Int } {
    value = 1
}
sig c { value: Int } {
    value = 3
}
sig abc { value: set Int } {
    value = a.value + b.value + c.value
}
sig sumab { value: Int } {
    value = plus[a.value, b.value]
}

pred add {}

run add for 4 int, exactly 1 sumab, exactly 1 a, exactly 1 b, exactly 1 c, exactly 1 abc
Note: I wrote this in pseudo-code; it may help to get to an answer:
fun plusN [setInt : set Int] : Int { // function "plusN" should take a set of integers "setInt" and return an integer
    if #setInt = 2 // if only two numbers in the set, sum them
    then plus[max setInt, min setInt]
    else // if more than 2, use recursion
        plusN[max setInt, plusN[{setInt - max setInt}]]
}
Note 2: The function sum may seem to be a good idea, but if I sum 1+1+1, the result will be 1 instead of 3, as the only number in the set is 1.
Alloy comes with a built-in sum function, so just do sum[value].
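For example, here is a minimal check built on top of the question's model (a sketch using the sum keyword form explained in the next answer; the bitwidth has to be at least 5 so that the total 8 is representable):
check sum_of_values {
    // a.value + b.value + c.value is the set union {4, 1, 3}; sum adds up its atoms
    (sum i: a.value + b.value + c.value | i) = 8
} for 5 int, exactly 1 a, exactly 1 b, exactly 1 c, exactly 1 abc, exactly 1 sumab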
As mentioned in this answer, Alloy has a sum keyword (different than the sum function) which can be used.
The general format is:
sum e: <set> | <expression involving e>
Variable sig values
Here's a simple example that sums the number of sales during a day for a restaurant's three meals. In particular, note the use of the sum keyword in fun sum_sales.
one sig Restaurant { total_sales: Int }
abstract sig Meal { sales: Int }
one sig Breakfast, Lunch, Dinner in Meal {}
// Each meal only falls in one category
fact { disj[Breakfast, Lunch, Dinner] }
// Every meal is in some category
fact { Breakfast + Lunch + Dinner = Meal }
// Keep the numbers small because the max alloy int is 7
fact { all m: Meal | m.sales <= 4 }
// Negative sales don't make sense
fact { all m: Meal | m.sales >= 0 }
fun sum_sales: Int { sum m: Meal | m.sales }
fact { Restaurant.total_sales = sum_sales }
This works even when all meals have the same number of sales (1 + 1 + 1 = 3), as shown in this check.
check { (all m: Meal | m.sales = 1) => Restaurant.total_sales = 3 }
Here are some other ways to play around with the example.
check {
    {
        // One meal with three sales
        #{ m: Meal | m.sales = 3 } = 1
        // Two meals with one sale
        #{ m: Meal | m.sales = 1 } = 2
    } => Restaurant.total_sales = 5
}
run { Restaurant.total_sales = 5 }
Constant sig values
Another way you might want to use this is to have the value associated with each type of Meal be constant, but allow the number of Meals to vary. You can model this with a relation mapping each meal type to a number of sales as follows.
one sig Restaurant { total_sales: Int }
abstract sig Meal {}
lone sig Breakfast, Lunch, Dinner in Meal {}
// Each meal only falls in one category
fact { disj[Breakfast, Lunch, Dinner] }
// Every meal is in some category
fact { Breakfast + Lunch + Dinner = Meal }
fun meal_sales: Meal -> Int {
    Breakfast -> 2
    + Lunch -> 2
    + Dinner -> 3
}
fun sum_sales: Int { sum m: Meal | m.meal_sales }
fact { Restaurant.total_sales = sum_sales }
check { #Meal = 3 => Restaurant.total_sales = 6 }
run { Restaurant.total_sales = 5 }

Return two numbers in Q Sharp (Q#) (Quantum Development Kit)

So, basically, I did the tutorial on the Microsoft Azure website for creating a random number, and now I am trying to add some functionality, including their suggestion to add a minimum number.
The initial code, which generates just one number up to max, is:
operation SampleRandomNumberInRange(max : Int) : Int {
    // mutable declares variables that can change during the computation
    mutable output = 0;
    // repeat loop: generate random numbers until we get one that is less than or equal to max
    repeat {
        mutable bits = new Result[0];
        for idxBit in 1..BitSizeI(max) {
            set bits += [GenerateRandomBit()];
        }
        // ResultArrayAsInt (from the Microsoft.Quantum.Convert library) converts the Result array to a non-negative integer
        set output = ResultArrayAsInt(bits);
    } until (output <= max);
    return output;
}

@EntryPoint()
operation SampleRandomNumber() : Int {
    // let declares variables which don't change during the computation
    let max = 50;
    Message($"Sampling a random number between 0 and {max}: ");
    return SampleRandomNumberInRange(max);
}
Everything works well. Now I want to generate two numbers, so I would like to create a function TwoSampleRandomNumbersInRange, but I can't figure out how to make the function return a result such as "Int, Int". I tried a few things, including the following:
operation TwoSampleRandomNumbersInRange(min: Int, max : Int) : Int {
    // mutable means variables that can change during computation
    mutable output = 0;
    // repeat loop to generate random numbers until it generates one that is less or equal to max
    repeat {
        mutable bits = new Result[0];
        for idxBit in 1..BitSizeI(max) {
            set bits += [GenerateRandomBit()];
        }
        for idxBit in 1..BitSizeI(min) {
            set bits += [GenerateRandomBit()];
        }
        // ResultArrayAsInt is from Microsoft.Quantum.Convert library, converts string to positive integer
        set output = ResultArrayAsInt(bits);
    } until (output >= min and output <= max);
    return output;
}
To generate two numbers, I tried this:
operation TwoSampleRandomNumbersInRange(min: Int, max : Int) : Int, Int {
//code here
}
...but the syntax for the output isn't right.
I also need the output:
set output = ResultArrayAsInt(bits);
to have two numbers but ResultArrayAsInt, as the name says, just returns an Int. I need to return two integers.
Any help appreciated, thanks!
The return type of an operation has to be a data type; in this case, to represent a pair of integers you need a tuple of integers: (Int, Int).
So the signature of your operation and the return statement will be:
operation TwoSampleRandomNumbersInRange(min: Int, max : Int) : (Int, Int) {
// code here
return (integer1, integer2);
}
I found the answer to my own question; all I had to do was:
operation SampleRandomNumberInRange(min: Int, max : Int) : Int {
    // mutable means variables that can change during computation
    mutable output = 0;
    // repeat loop to generate random numbers until it generates one in the requested range
    repeat {
        mutable bits = new Result[0];
        for idxBit in 1..BitSizeI(max) {
            set bits += [GenerateRandomBit()];
        }
        // ResultArrayAsInt (from the Microsoft.Quantum.Convert library) converts the Result array to a non-negative integer
        set output = ResultArrayAsInt(bits);
    } until (output >= min and output <= max);
    return output;
}

@EntryPoint()
operation SampleRandomNumber() : Int {
    // let declares variables which don't change during the computation
    let max = 50;
    let min = 10;
    Message($"Sampling a random number between {min} and {max}: ");
    return SampleRandomNumberInRange(min, max);
}

Alloy Analyzer element comparison from set

Some background: my project is to make a compiler that compiles from a C-like language to Alloy. The input language, which has C-like syntax, must support contracts. For now, I am trying to implement if statements that support pre- and post-condition statements, similar to the following:
int x = 2
if_preCondition(x > 0)
if (x == 2) {
    x = x + 1
}
if_postCondition(x > 0)
The problem is that I am a bit confused with the results of Alloy.
sig Number {
    arg1: Int
}

fun addOneConditional(x : Int) : Number {
    { v : Number |
        v.arg1 = ((x = 2) => x.add[1] else x)
    }
}

assert conditionalSome {
    all n: Number | (n.arg1 = 2) => (some field: addOneConditional[n.arg1] | { field.arg1 = n.arg1.add[1] })
}

assert conditionalAll {
    all n: Number | (n.arg1 = 2) => (all field: addOneConditional[n.arg1] | { field.arg1 = n.arg1.add[1] })
}

check conditionalSome
check conditionalAll
In the above example, conditionalAll does not generate any counterexample. However, conditionalSome generates counterexamples. If I understand the all and some quantifiers correctly, then there is a mistake, because from mathematical logic we have ∀x expr(x) => ∃x expr(x) (i.e., if expr(x) is true for all values of x, then there exists an x for which expr(x) is true).
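A quick way to see the asymmetry between all and some is the empty-set case: an all over an empty set holds vacuously, while a some needs a witness, and addOneConditional[n.arg1] is exactly such an empty set whenever no Number atom happens to carry the value 3. A minimal, self-contained illustration (the sig Elem is hypothetical, not part of the question):
sig Elem {}

// holds: when Elem is empty there is nothing to check, so the body is vacuously true
check all_over_empty_holds {
    no Elem => (all x: Elem | x != x)
}

// fails: when Elem is empty there is no witness, so the Analyzer reports a counterexample
check some_over_empty_fails {
    some x: Elem | x = x
}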
The first thing is that you need to model your pre-conditions, post-conditions, and operations. Functions are terrible for that because they cannot return something that indicates failure. You therefore need to model the operation as a predicate. The value of a predicate indicates whether the pre/post conditions are satisfied; the arguments and return values can be modeled as parameters.
This is as far as I can understand your operation:
pred add[ x : Int, x' : Int ] {
    x > 0          -- pre condition
    x = 2 =>
        x' = x.plus[1]
    else
        x' = x
    x' > 0         -- post condition
}
Alloy has no writable variables (Electrum has) so you need to model the before and after state with a prime (') character.
We can now use this predicate to calculate the set of solutions to your problem:
fun solutions : set Int {
{ x' : Int | some x : Int | add[ x,x' ] }
}
We create a set with integers for which we have a result. The prime character is nothing special in Alloy, it is only a convention for the post-state. I am abusing it slightly here.
This is more than enough Alloy source to make mistakes so let's test this.
run { some solutions }
If you run this then you'll see in the Txt view:
skolem $solutions={1, 3, 4, 5, 6, 7}
This is as expected. The add operation only works for positive numbers. Check. If the input is 2, the result is 3. Ergo, 2 can never be a solution. Check.
I admit, I am slightly confused by what you're doing in your asserts. I've tried to replicate them faithfully, although I've removed unnecessary things, at least I think they're unnecessary. First, your some case. Your code was doing an all but then selecting on 2, so I removed the outer quantification and hardcoded 2.
check somen {
    some x' : solutions | 2.plus[1] = x'
}
This indeed does not give us any counterexample. Since solutions was {1, 3, 4, 5, 6, 7}, 2+1=3 is in the set, i.e. the some condition is satisfied.
check alln {
    all x' : solutions | 2.plus[1] = x'
}
However, not all solutions have 3 as the answer. Checking this, I get the following counterexample:
skolem $alln_x'={7}
skolem $solutions={1, 3, 4, 5, 6, 7}
Conclusion. Daniel Jackson advises not to learn Alloy with Ints. Looking at your Number sig, you took him literally: you still base your problem on Ints. What he meant was not to use Int at all, not to hide it under the carpet in a field. I understand where Daniel is coming from, but Ints are very attractive since we're so familiar with them. However, if you use Ints, at least use them in their full glory and don't hide them.
Hope this helps.
And the whole model:
pred add[ x : Int, x' : Int ] {
    x > 0          -- pre condition
    x = 2 =>
        x' = x.plus[1]
    else
        x' = x
    x' > 0         -- post condition
}

fun solutions : set Int { { x' : Int | some x : Int | add[ x, x' ] } }

run { some solutions }

check somen { some x' : solutions | x' = 3 }
check alln { all x' : solutions | x' = 3 }

Lock Challenge in Alloy

I would like to solve the following lock challenge using Alloy.
My main issue is how to model the integers representing the digit keys.
I created a quick draft:
sig Digit, Position {}

sig Lock {
    d: Digit one -> lone Position
}

run {} for exactly 1 Lock, exactly 3 Position, 10 Digit
In this context, could you please:
tell me if Alloy seems to you suitable to solve this kind of problem?
give me some pointers regarding the way I could model the key digits (without using Ints)?
Thank you.
My framing of this puzzle is:
enum Digit { N0,N1,N2,N3,N4,N5,N6,N7,N8,N9 }

one sig Code { a, b, c: Digit }

pred hint(h1, h2, h3: Digit, matched, wellPlaced: Int) {
    matched = #(XXXX)    // fix XXXX
    wellPlaced = #(XXXX) // fix XXXX
}

fact {
    hint[N6,N8,N2, 1,1]
    hint[N6,N1,N4, 1,0]
    hint[N2,N0,N6, 2,0]
    hint[N7,N3,N8, 0,0]
    hint[N7,N8,N0, 1,0]
}

run {}
A simple way to get started: you do not always need sigs. The solution found is probably not the intended solution, but that is because the requirements are ambiguous and I took a shortcut.
pred lock[ a, b, c : Int ] {
    a = 6 || b = 8 || c = 2
    a in 1+4 || b in 6+4 || c in 6+1
    a in 0+6 || b in 2+6 || c in 2+0
    a != 7 && b != 3 && c != 8
    a = 7 || b = 8 || c = 0
}

run lock for 6 int
Look in the Text view for the answer.
Update: we had a discussion on the Alloy list and I'd like to amend my solution with a more readable version:
let sq[a, b, c] = 0->a + 1->b + 2->c
let digit = { n : Int | n >= 0 and n < 10 }

fun correct[ lck : seq digit, a, b, c : digit ] : Int { # (Int.lck & (a+b+c)) }
fun wellPlaced[ lck : seq digit, a, b, c : digit ] : Int { # (lck & sq[a,b,c]) }

pred lock[ a, b, c : digit ] {
    let lck = sq[a,b,c] {
        1 = correct[ lck, 6,8,2 ] and 1 = wellPlaced[ lck, 6,8,2 ]
        1 = correct[ lck, 6,1,4 ] and 0 = wellPlaced[ lck, 6,1,4 ]
        2 = correct[ lck, 2,0,6 ] and 0 = wellPlaced[ lck, 2,0,6 ]
        0 = correct[ lck, 7,3,8 ]
        1 = correct[ lck, 7,8,0 ] and 0 = wellPlaced[ lck, 7,8,0 ]
    }
}

run lock for 6 Int
When you think the solution is complete, let's examine whether it is generic.
Here is another lock.
If you can't solve this one in the same form, your solution may not be general enough.
Hint1: (1,2,3) - Nothing is correct.
Hint2: (4,5,6) - Nothing is correct.
Hint3: (7,8,9) - One number is correct but wrong placed.
Hint4: (9,0,0) - All numbers are correct, with one well placed.
Yes, I think Alloy is suitable for this kind of problem.
Regarding digits, you don't need integers at all: in fact, it is a bit irrelevant for this particular purpose if they are digits or any set of 10 different identifiers (no arithmetic is performed with them). You can use singleton signatures to declare the digits, all extending signature Digit, which should be marked as abstract. Something like:
abstract sig Digit {}
one sig Zero, One, ..., Nine extends Digit {}
A similar strategy can be used to declare the three different positions of the lock. And btw since you have exactly one lock you can also declare Lock as singleton signature.
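For instance, a minimal sketch of that shape (the Position names and the dial field are illustrative, not taken from the question, and the ten digit signatures are just the answer's Zero ... Nine spelled out):
abstract sig Digit {}
one sig Zero, One, Two, Three, Four, Five, Six, Seven, Eight, Nine extends Digit {}

abstract sig Position {}
one sig First, Second, Third extends Position {}

one sig Lock {
    dial: Position -> one Digit
}

run {}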
I like the Nomura solution on this page. I made a slight modification of the predicate and the fact to solve it.
enum Digit { N0,N1,N2,N3,N4,N5,N6,N7,N8,N9 }

one sig Code { a, b, c: Digit }

pred hint(code: Code, d1, d2, d3: Digit, correct, wellPlaced: Int) {
    correct = #((code.a + code.b + code.c) & (d1 + d2 + d3))
    wellPlaced = #((0->code.a + 1->code.b + 2->code.c) & (0->d1 + 1->d2 + 2->d3))
}

fact {
    some code: Code |
        hint[code, N6,N8,N2, 1,1] and
        hint[code, N6,N1,N4, 1,0] and
        hint[code, N2,N0,N6, 2,0] and
        hint[code, N7,N3,N8, 0,0] and
        hint[code, N7,N8,N0, 1,0]
}

run {}
Update (2020-12-29):
The new puzzle presented by Nomura (https://stackoverflow.com/a/61022419/5005552) demonstrates a weakness in the original solution: it does not account for multiple uses of a digit within a code. A modification to the expression for "correct" fixes this. Intersect each guessed digit with the union of the digits from the passed code and sum them for the true cardinality. I encapsulated the matching in a function, which will return 0 or 1 for each digit.
enum Digit { N0,N1,N2,N3,N4,N5,N6,N7,N8,N9 }

let sequence[a, b, c] = 0->a + 1->b + 2->c

one sig Code { c1, c2, c3: Digit }

fun match[code: Code, d: Digit]: Int { #((code.c1 + code.c2 + code.c3) & d) }

pred hint(code: Code, d1, d2, d3: Digit, correct, wellPlaced: Int) {
    // The intersection of each guessed digit with the code (unordered) tells us
    // whether any of the digits match each other and how many
    correct = match[code, d1].plus[match[code, d2]].plus[match[code, d3]]
    // The intersection of the sequences of digits (ordered) tells us whether
    // any of the digits are correct AND in the right place in the sequence
    wellPlaced = #(sequence[code.c1, code.c2, code.c3] & sequence[d1, d2, d3])
}

pred originalLock {
    some code: Code |
        hint[code, N6,N8,N2, 1,1] and
        hint[code, N6,N1,N4, 1,0] and
        hint[code, N2,N0,N6, 2,0] and
        hint[code, N7,N3,N8, 0,0] and
        hint[code, N7,N8,N0, 1,0]
}

pred newLock {
    some code: Code |
        hint[code, N1,N2,N3, 0,0] and
        hint[code, N4,N5,N6, 0,0] and
        hint[code, N7,N8,N9, 1,0] and
        hint[code, N9,N0,N0, 3,1]
}

run originalLock
run newLock
run test { some code: Code | hint[code, N9,N0,N0, 3,1] }

How can I get values of enum variable?

My question is: how can I get the values of an enum variable?
Please look at the attached screenshot... "hatas" is a flags enum, and I want to get the "HasError" and "NameOrDisplaynameTooShort" errors so I can show them.
using System;

namespace CampaignManager.Enums
{
    [Flags]
    public enum CampaignCreaterUpdaterErrorMessage
    {
        NoError = 0,
        HasError = 1,
        NameOrDisplaynameTooShort = 2,
        InvalidFirstName = 3,
    }
}
I tried simply:
MessageBox.Show(hatas); // it's showing InvalidFirstName somehow...
Thank you very much for any help...
First thing: if you want to use the FlagsAttribute on your enum, you need to define the values as powers of two, like this (with your original values, HasError | NameOrDisplaynameTooShort is 1 | 2 == 3, which is the value of InvalidFirstName; that is why hatas shows up as InvalidFirstName):
[Flags]
public enum CampaignCreaterUpdaterErrorMessage
{
    NoError = 0,
    HasError = 1,
    NameOrDisplaynameTooShort = 2,
    InvalidFirstName = 4,
}
To get parts of a flagged enum, try something like:
var hatas = CampaignCreaterUpdaterErrorMessage.HasError | CampaignCreaterUpdaterErrorMessage.NameOrDisplaynameTooShort;
var x = (int)hatas;

for (int i = 0; i < Enum.GetNames(typeof(CampaignCreaterUpdaterErrorMessage)).Length; i++)
{
    int z = 1 << i; // create bit mask
    if ((x & z) == z) // test mask against flags enum
    {
        Console.WriteLine(((CampaignCreaterUpdaterErrorMessage)z).ToString());
    }
}
For getting the underlying value try casting:
MessageBox.Show(((int)hatas).ToString());
In your example, ToString is getting called by default against the CampaignCreaterUpdaterErrorMessage enum, which returns the string representation of the enum value.
By casting to an int, the default underlying type for enums, you get ToString on the integer value instead.
You need to cast/unbox the enum into an int as follows.
(int)CampaignCreaterUpdaterErrorMessage.NoError
(int)CampaignCreaterUpdaterErrorMessage.HasError
Try this:
MessageBox.Show(CampaignCreaterUpdaterErrorMessage.NameOrDisplaynameTooShort.ToString());
