Using a subset of all defined relations - alloy

module test
sig Foo {}
sig A {
  b: set B,
  foo: one Foo
}
sig B {
  foo: one Foo
}
assert foo {
  all s: (univ - Foo) | all rel: (univ - Foo) -> (univ - Foo) |
    s not in s.*rel
}
check foo for 2
Suppose I'm trying to define a relation within an assert that says "apart from Foo and the relations that include Foo, there are no cycles" (which is trivially true by inspection here).
The assert above creates some $foo_rel relations which aren't defined originally in the model. How do I restrict it to only the relations I've specified in my sigs?

The variables $foo_rel and $foo_s are introduced by skolemization, since you're quantifying over an arity-2 tuple. (Personally, I've so far ignored the details of skolemization and only hate it when it doesn't work.)
However, I do not think that is your problem. I ran the model without integers:
check foo for 2 but 0 int
This gave the following solution:
---INSTANCE---
integers={}
univ={B$0, Foo$0}
Int={}
seq/Int={}
String={}
none={}
this/Foo={Foo$0}
this/A={}
this/A<:b={}
this/A<:foo={}
this/B={B$0}
this/B<:foo={B$0->Foo$0}
skolem $foo_s={B$0}
skolem $foo_rel={B$0->B$0}
When we go to the evaluator we can do your quantifications step by step:
> univ-Foo
┌──┐
│B⁰│
└──┘
> (univ - Foo) -> (univ - Foo)
┌──┬──┐
│B⁰│B⁰│
└──┴──┘
It is clear that if you quantify over all rel: (univ - Foo) -> (univ - Foo), then rel ranges over every possible tuple in that product, not just the fields declared in your sigs, so it will include tuples that form cycles:
> B⁰.*((univ - Foo) -> (univ - Foo))
┌──┐
│B⁰│
└──┘
I think there is some misunderstanding here of how quantification in Alloy models works; I hope this helps you explore these models a bit further.
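If the goal is to reason only about the relations you actually declared, refer to those fields by name instead of quantifying over every tuple in (univ - Foo) -> (univ - Foo). Here is a sketch (the assert name is mine): in this model, b is the only declared field whose tuples avoid Foo entirely, so the union of Foo-free fields is just b; with more such fields you would take their union inside the closure.
assert onlyDeclaredRelations {
  // b is the only declared relation that does not involve Foo,
  // so "no cycles outside Foo" reduces to: no atom reaches itself via b
  all s: univ - Foo | s not in s.^b
}
check onlyDeclaredRelations for 2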

Related

How does Rust compile this example with cyclic trait bounds?

I'm having trouble understanding how the following example, distilled from this code, compiles:
trait A: B {}
trait B {}
impl<T> B for T where T: A {}
struct Foo;
impl A for Foo {}
fn main() {}
My current understanding is that:
trait A: B declares a trait A with the supertrait B. The Rust reference on traits states:
"Supertraits are traits that are required to be implemented for a type to implement a specific trait."
impl<T> B for T where T: A implements B for any type with the trait A.
I expect impl A for Foo to fail because before A is implemented for Foo, the blanket implementation can't implement B for Foo, which is required.
My most plausible model for what rustc does while compiling the snippet is as follows:
implement A for Foo, deferring the check that Foo implements B to a later stage
implement B for Foo with the blanket implementation, since Foo now implements A
check that Foo implements B as required by the trait bound A: B
Is this in some way close to the truth? Is there any documentation I missed explaining the order in which implementations are processed?
rustc doesn't work "in order". Rather, we first register all impls and then type-check each impl in no particular order. The idea is that we collect a list of obligations (of various kinds; a trait bound is one of them), and then we match them against impls (not only against impls - this is just one way to resolve an obligation, but it is the one relevant here). Each obligation can create further, recursive obligations, and we elaborate them until there are no more.
The way it currently works is that when we check an impl Trait for Type, we add an obligation Type: Trait. This might seem silly, but we later elaborate it further until all required bounds are met.
So let's say we're currently checking impl<T> B for T where T: A. We add one obligation, T: B, and match it against impl B for T. There is nothing to elaborate further, so we finish successfully.
We then check impl A for Foo, and add an obligation Foo: A. Since the trait A requires Self: B, we add another obligation, Foo: B. Then we start matching obligations: the first obligation, Foo: A, is matched by the impl currently being processed, with no additional obligations. The second obligation, Foo: B, is matched against impl<T> B for T where T: A. This has a new obligation - T: A, that is, Foo: A - so we try to match that. We successfully match it against impl A for Foo, with no additional obligations.
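To make the walkthrough concrete, here is the original program again with the obligations described above noted as comments (a rough sketch of the process, not actual compiler output):
trait A: B {}
trait B {}

// Checking this impl registers the obligation `T: B`, which is matched
// against this same impl; there is nothing further to elaborate.
impl<T> B for T where T: A {}

struct Foo;

// Checking this impl registers `Foo: A` (matched by this very impl) and,
// because of the supertrait `A: B`, also `Foo: B`. `Foo: B` is matched by
// the blanket impl above, which adds `Foo: A` again, matched by this impl.
impl A for Foo {}

fn main() {}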
An interesting implication of the above is that if we change the second impl to the following:
impl A for Foo where Foo: B {}
Then this no longer compiles, failing with an "overflow evaluating the requirement Foo: A" error (playground), even though it is essentially the same. The reason is that now, to prove Foo: A, rustc needs to prove Foo: B and then again Foo: A, while previously it just registered an obligation for Foo: B without proving it immediately.
Note: The above is an over-simplification: for example, there is also a cache, well-formed obligations, and much more. But the general principle is the same.

"Unconstrained generic constant" when adding const generics

How would I add const generics? Let's say I have a type foo:
pub struct foo<const bar: i64> {
    value: f64,
}
and I want to implement Mul so I can multiply two foos together. I want to treat bar as a dimension, so foo<baz>{value: x} * foo<quux>{value: k} == foo<baz + quux>{value: x * k}, as follows:
impl<const baz: i64, const quux: i64> Mul<foo<quux>> for foo<baz> {
    type Output = foo<{baz + quux}>;
    fn mul(self, rhs: foo<quux>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}
I get an error telling me I need to add a where bound on {baz+quux} within the definition of the output type. What exactly does this mean and how do I implement it? I can't find any seemingly relevant information on where.
The solution
I got a variation on your code to work here:
impl<const baz: i64, const quux: i64> Mul<Foo<quux>> for Foo<baz>
where Foo<{baz + quux}>: Sized {
    type Output = Foo<{baz + quux}>;
    fn mul(self, rhs: Foo<quux>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}
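For reference, here is that solution as a complete program with a usage example. This is a sketch: it assumes a nightly toolchain, since const expressions such as {baz + quux} in generic arguments still require the incomplete generic_const_exprs feature, and the const parameters are renamed to upper case to satisfy Rust's naming lints.
#![feature(generic_const_exprs)] // nightly-only, incomplete feature
#![allow(incomplete_features)]

use std::ops::Mul;

pub struct Foo<const BAR: i64> {
    value: f64,
}

impl<const BAZ: i64, const QUUX: i64> Mul<Foo<QUUX>> for Foo<BAZ>
where
    Foo<{ BAZ + QUUX }>: Sized,
{
    type Output = Foo<{ BAZ + QUUX }>;
    fn mul(self, rhs: Foo<QUUX>) -> Self::Output {
        Self::Output {
            value: self.value * rhs.value,
        }
    }
}

fn main() {
    // Treating the const parameter as a dimension: Foo<1> * Foo<2> == Foo<3>.
    let a = Foo::<1> { value: 2.0 };
    let b = Foo::<2> { value: 3.0 };
    let c: Foo<3> = a * b;
    assert_eq!(c.value, 6.0);
}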
How I got there
I've reproduced the full error that you get without the added where clause below:
error: unconstrained generic constant
--> src/main.rs:11:5
|
11 | type Output = Foo<{baz + quux}>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
help: try adding a `where` bound using this expression: `where [u8; {baz + quux}]: Sized`
Now, the clause that it suggests is not very useful, for one reason: the length parameter of an array must be a usize, but our values baz and quux (and their sum) are i64. I'd imagine that the compiler authors included that particular suggestion because the primary use case for const generics is embedding array sizes in types. I've opened an issue on GitHub about this diagnostic.
Why is this necessary?
A where clause specifies constraints on some generic code element (a function, type, trait, or, in this case, an implementation) based on the traits and lifetimes that one or more generic parameters, or a derivative thereof, must satisfy. There are equivalent shorthands for many cases, but the overall requirement is that the constraints are fully specified.
In our case, it may seem superficially that this implementation works for any combination of baz and quux, but this is not the case, due to integer overflow; if we supply sufficiently large values of the same sign for both, their sum cannot be represented by i64. This means that i64 is not closed under addition.
The constraint that we add requires, indirectly, that the sum of the two values is in the set of possible values of an i64, by requiring something of the type which consumes it. Hence, supplying 2^62 for both baz and quux is not valid, since the resulting type Foo<{baz + quux}> does not exist, so it cannot possibly implement the Sized trait. While this is technically a stricter constraint than we need (Sized is a stronger requirement than a type simply existing), every Foo<bar> which exists implements Sized, so in our case it amounts to the same thing. Without it, on the other hand, no where clause, explicit or shorthand, specifies this requirement.

Getting to know Rust and the type system

Coming from the ML family and having learned to program in SML, a very strict type system is commonplace to me.
I'm trying to learn Rust. Since I'm used to strongly typed languages, and as a computer science student majoring in compiler theory and programming language theory, I thought I would just skip the boring hello-world intro material. At the moment I'm trying to get to know the type system, with generics and lifetimes in scope.
For a simple project I'm implementing a singly linked list library, since I miss having one and it will give me a feel for how the type system works. I'm using references/borrowing instead of boxing to keep things on the stack, and hence need to tell the compiler explicitly when a list node goes out of scope.
pub enum List<'a, L> {
    Cons(L, &'a List<'a, L>),
    Nil,
}

impl<'a, L, S> List<'a, L> {
//          ^ unconstrained type parameter // error here
    pub fn new() -> List<'a, L> {
        &List::Nil
    }
    pub fn fold(f: fn(S, L) -> S, acc: S, lst: List<'a, L>) -> S {
        match lst {
            List::Cons(item, rest) => fold(f, f(acc, item), rest),
            List::Nil => acc,
        }
    }
}
I get that the type parameter S is unconstrained, but why does that matter when I clearly don't want it to be constrained at all? Coming from SML and F#, this is no problem, since it can be deduced at compile time.
I know that tail recursion elimination is yet to become a feature of Rust; that is not the subject of this question.
Lifetimes and generics declared on an impl are meant to be used on the corresponding type or trait.
If you want generics on a method, you have to declare them there:
pub fn fold<S>(f: fn(S, L) -> S, acc: S, lst: List<'a, L>) -> S {
//         ^^^
    match lst {
        List::Cons(item, rest) => fold(f, f(acc, item), rest),
        List::Nil => acc,
    }
}
I believe one reason the compiler wants you to declare generics where they are used, rather than on some bigger enclosing block, is that it needs to know their variance, which depends on how they are used.
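For reference, here is a complete sketch that builds on this fix. Beyond declaring <S> on the method, the recursion itself needs two further adjustments before it compiles: the list (and hence each element) is passed by reference, so the tail can be reused without moving out of a borrow, and the recursive call is qualified as Self::fold. The add helper and the list built in main are purely illustrative.
pub enum List<'a, L> {
    Cons(L, &'a List<'a, L>),
    Nil,
}

impl<'a, L> List<'a, L> {
    // S is declared on the method, where it is used.
    pub fn fold<S>(f: fn(S, &L) -> S, acc: S, lst: &List<'a, L>) -> S {
        match lst {
            List::Cons(item, rest) => Self::fold(f, f(acc, item), rest),
            List::Nil => acc,
        }
    }
}

fn add(acc: i64, item: &i64) -> i64 {
    acc + *item
}

fn main() {
    // Build the list 1 -> 2 -> Nil entirely on the stack.
    let nil = List::Nil;
    let tail = List::Cons(2, &nil);
    let list = List::Cons(1, &tail);
    assert_eq!(List::fold(add, 0, &list), 3);
}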

Why does boxing an array of function pointers with `box` syntax only work with a temporary `let` binding?

I have two dummy functions:
fn foo() -> u32 { 3 }
fn bar() -> u32 { 7 }
And I want to create a boxed slice of function pointers: Box<[fn() -> u32]>. I want to do it with the box syntax (I know that it's not necessary for two elements, but my real use case is different).
I tried several things (Playground):
// Version A
let b = box [foo, bar] as Box<[_]>;
// Version B
let tmp = [foo, bar];
let b = box tmp as Box<[_]>;
// Version C
let b = Box::new([foo, bar]) as Box<[_]>;
Versions B and C work fine (though C won't work for me, as it uses Box::new), but Version A errors:
error[E0605]: non-primitive cast: `std::boxed::Box<[fn() -> u32; 2]>` as `std::boxed::Box<[fn() -> u32 {foo}]>`
--> src/main.rs:8:13
|
8 | let b = box [foo, bar] as Box<[_]>;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: an `as` expression can only be used to convert between primitive types. Consider using the `From` trait
Apparently, for some reason, in the version A, the compiler isn't able to coerce the function items to function pointers. Why is that? And why does it work with the additional temporary let binding?
This question was inspired by this other question. I wondered why vec![foo, bar] errors, but [foo, bar] works fine. I looked at the definition of vec![] and found this part which confused me.
This looks like an idiosyncrasy of the type inference algorithm to me, and there is probably no deeper reason for it except that the current inference algorithm happens to behave the way it does. There is no formal specification of when type inference works and when it doesn't. If you encounter a situation the type inference engine cannot handle, you need to add type annotations or rewrite the code in a way that lets the compiler infer the types correctly, and that is exactly what you need to do here.
Each function in Rust has its own individual function item type, which cannot be directly named by syntax, but is displayed as e.g. fn() -> u32 {foo} in error messages. There is a special coercion that converts function item types with identical signatures to the corresponding function pointer type if they occur in different arms of a match, in different branches of an if, or in different elements of an array. This coercion differs from other coercions in that it does not only occur in explicitly typed context ("coercion sites"), and this special treatment is the likely cause of this idiosyncrasy.
The special coercion is triggered by the binding
let tmp = [foo, bar];
so the type of tmp is completely determined as [fn() -> u32; 2]. However, it appears the special coercion is not triggered early enough in the type inference algorithm when writing
let b = box [foo, bar] as Box<[_]>;
The compiler first assumes the element type of an array is the type of its first element, and apparently, when trying to determine what _ denotes here, it still hasn't updated this notion – according to the error message, _ is inferred to mean fn() -> u32 {foo}. Interestingly, the compiler has already correctly inferred the full type of box [foo, bar] by the time it prints the error message, so the behaviour is indeed rather weird. A full explanation could only be given by looking at the compiler sources in detail.
Rust's type solver engine often can't handle situations it should theoretically be able to solve. Niko Matsakis' chalk engine is meant to provide a general solution for all these cases at some point in the future, but I don't know what the status and timeline of that project are.
[T; N] to [T] is an unsizing coercion.
CoerceUnsized<Pointer<U>> for Pointer<T> where T: Unsize<U> is
implemented for all pointer types (including smart pointers like Box
and Rc). Unsize is only implemented automatically, and enables the
following transformations:
[T; n] => [T]
These coercions only happen at certain coercion sites:
Coercions occur at a coercion site. Any location that is explicitly
typed will cause a coercion to its type. If inference is necessary,
the coercion will not be performed. Exhaustively, the coercion sites
for an expression e to type U are:
let statements, statics, and consts: let x: U = e
Arguments to functions: takes_a_U(e)
Any expression that will be returned: fn foo() -> U { e }
Struct literals: Foo { some_u: e }
Array literals: let x: [U; 10] = [e, ..]
Tuple literals: let x: (U, ..) = (e, ..)
The last expression in a block: let x: U = { ..; e }
Your case B is a let statement, your case C is a function argument. Your case A is not covered.
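Putting the two answers together, here is a small sketch on stable Rust (no box syntax) that uses the foo/bar functions from the question. Inside the array literal the two distinct function item types are coerced to the common pointer type fn() -> u32, and the explicitly typed let bindings are coercion sites where the unsizing from Box<[fn() -> u32; 2]> to Box<[fn() -> u32]> happens:
fn foo() -> u32 { 3 }
fn bar() -> u32 { 7 }

fn main() {
    // The elements have two different function item types, but the array
    // literal coerces both to the function pointer type `fn() -> u32`.
    let fns: [fn() -> u32; 2] = [foo, bar];

    // The explicitly typed `let` is a coercion site, so the unsizing
    // coercion Box<[fn() -> u32; 2]> -> Box<[fn() -> u32]> is applied.
    let boxed: Box<[fn() -> u32]> = Box::new(fns);

    let total: u32 = boxed.iter().map(|f| f()).sum();
    assert_eq!(total, 10);
}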
Going on pure instinct, I'd point out that box is an unstable magic keyword, so it's possible that it's just half-implemented. Maybe it should have coercions applied to the argument but no one has needed it and thus it was never supported.

Confused about the implication operator in assertions

Signature Test has two fields, a and b:
sig Test {
  a: lone Value,
  b: lone Value
}
sig Value {}
Note that a and b may or may not have a value.
A Test is valid only if it satisfies this codependency: If a has a value, then b must also have a value:
pred valid (t: Test) {
  one t.a => one t.b
}
I want to create an assertion and I expect the Alloy Analyzer to find counterexamples. The assertion is this: If t: Test is valid, then t': Test is also valid, where t' is identical to t, except t' does not have a value for b:
assert Valid_After_Removing_b_Value {
  all t, t': Test {
    t'.a = t.a
    no t'.b
    valid[t] => valid[t']
  }
}
I expect the Alloy Analyzer to generate counterexamples like this: t has a value for a and b. t' has a value for a but not for b. So, t is valid, but t' is not. But the Analyzer gives counterexamples like this: t has a value for a and b and t' has a value for a and b. I don't understand that. If t has a value for a and b, then t is valid. Likewise, if t' has a value for a and b, then t' is valid. How could that be a counterexample?
What is the right way to express the assertion? Again, my goal is to express this: I assert that if t is valid, then a slightly lesser version of t (e.g., b has no value) is also valid. That should generate counterexamples, due to the codependency.
I think your assertion should be:
assert Valid_After_Removing_b_Value {
  all t, t': Test {
    (t'.a = t.a &&
     no t'.b &&
     valid[t]) => valid[t']
  }
}
In your original assertion, the three lines inside the quantifier are three independent conjuncts that must all hold for every pair t, t'. For example, any t' that has a value for b already violates the second conjunct (no t'.b), so it is a counterexample regardless of whether t and t' are valid.
