Disseminating a token in Alloy

I'm following an example in Daniel Jackson's excellent book (Software Abstractions), specifically the example in which he has a token-ring setup in order to elect a leader.
I'm attempting to extend this example (ring election) so that the token, instead of being limited to one process, is passed around to all members within the provided time (and each member is elected exactly once, not multiple times). However, mostly due to my inexperience with Alloy, I'm having trouble figuring out the best way to do this. Initially I thought I could play with some of the operators (changing -'s to +'s), but I don't seem to be quite hitting the nail on the head.
Below is the code from the example. I've marked up a few areas with questions...any and all help is appreciated. I'm using Alloy 4.2.
module chapter6/ringElection1 --- the version up to the top of page 181

open util/ordering[Time] as TO
open util/ordering[Process] as PO

sig Time {}
sig Process {
    succ: Process,
    toSend: Process -> Time,
    elected: set Time
}

// ensure processes are in a ring
fact ring {
    all p: Process | Process in p.^succ
}

pred init [t: Time] {
    all p: Process | p.toSend.t = p
}

// QUESTION: I'd thought that within this predicate and the following fact I could
// change the logic from only having one election at a time to all being elected eventually.
// However, I can't seem to get the logic down for this portion.
pred step [t, t': Time, p: Process] {
    let from = p.toSend, to = p.succ.toSend |
        some id: from.t {
            from.t' = from.t - id
            to.t' = to.t + (id - p.succ.prevs)
        }
}

fact defineElected {
    no elected.first
    all t: Time-first | elected.t = {p: Process | p in p.toSend.t - p.toSend.(t.prev)}
}

fact traces {
    init [first]
    all t: Time-last |
        let t' = t.next |
            all p: Process |
                step [t, t', p] or step [t, t', succ.p] or skip [t, t', p]
}

pred skip [t, t': Time, p: Process] {
    p.toSend.t = p.toSend.t'
}

pred show { some elected }
run show for 3 Process, 4 Time
// This generates an instance similar to Fig 6.4

// QUESTION: here I'm attempting to assert that ALL Processes have an election,
// however the 'all' keyword has been deprecated. Is there an appropriate command in
// Alloy 4.2 to take the place of this?
assert OnlyOneElected { all elected.Time }
check OnlyOneElected for 10 Process, 20 Time

This network protocol is exactly about how to elect a single process to be the leader, so I don't really understand what you mean by having "all processes elected eventually".
Instead of all elected.Time, you can equivalently write elected.Time = Process (since the type of elected is Process -> Time). This just says that elected.Time (the set of processes elected at any time step) is exactly the set of all processes, which obviously doesn't mean that "only one process is elected", as the name of your assertion suggests.
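If the property you actually want to phrase is "every process is eventually elected", a minimal sketch using the elected.Time = Process formulation above would be the following (the assertion name and bounds are made up here; expect counterexamples, since the protocol elects a single leader):

// Hedged sketch: asserts every process appears in elected at some time step.
assert EveryoneElected { elected.Time = Process }
check EveryoneElected for 3 Process, 7 Time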

Related

tokio::join and different results

First I wrote the first implementation, then tried to simplify it to the second one, but surprisingly the second one is almost 3x slower. Why?
First implementation (faster):
let data: Vec<Arc<Data>> = vec![d1, d2, d3];
let mut handles = Vec::new();
for d in &data {
    let d = d.clone();
    handles.push(tokio::spawn(async move {
        d.update().await;
    }));
}
for handle in handles {
    let _ = tokio::join!(handle);
}
Second implementation (slower):
let data: Vec<Arc<Data>> = vec![d1, d2, d3];
for d in &data {
    let d = d.clone();
    let _ = tokio::join!(tokio::spawn(async move {
        d.update().await;
    }));
}
In the first example you spawn all your tasks onto the executor, allowing them to run in parallel, and then you join them in sequence. In the second example you spawn each task onto the executor in sequence, but you wait for each task to finish before spawning the next one, so you get zero parallelism and thus no speedup. The important observation is that in the first example all of your tasks are making progress in the background even while you're waiting for them to finish one by one. Also, something like join_all would probably be more appropriate than joining in a loop for waiting on the tasks in the first example.
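A minimal sketch of that join_all suggestion, assuming Data and its async update method are as in the question and that the futures crate is available:

use futures::future::join_all;
use std::sync::Arc;

// `Data` is the question's type; only an async `update(&self)` is assumed.
async fn update_all(data: Vec<Arc<Data>>) {
    // Spawn every task up front so they all run concurrently.
    let handles: Vec<_> = data
        .iter()
        .map(|d| {
            let d = Arc::clone(d);
            tokio::spawn(async move {
                d.update().await;
            })
        })
        .collect();

    // Wait for all of the join handles at once instead of one by one.
    join_all(handles).await;
}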

When will Go scheduler create a new M and P?

I've just learned about the golang GMP model, and now I understand how goroutines, OS threads, and golang contexts/processors cooperate with each other. But I still don't understand when an M and a P will be created.
For example, I have some test code that runs operations against a DB, with two test cases (two batches of goroutines):
package main

import (
    "testing"
    "time"

    "github.com/stretchr/testify/assert"
)

func Test_GMP(t *testing.T) {
    for _ = range []struct {
        name string
    }{
        {"first batch"},
        {"second batch"},
    } {
        goroutineSize := 50
        done := make(chan error, goroutineSize)
        for i := 0; i < goroutineSize; i++ {
            go func() {
                // do some database operations...
                // each goroutine should be blocked here for some time...
                // propagate the result
                done <- nil
            }()
        }
        for i := 0; i < goroutineSize; i++ {
            select {
            case err := <-done:
                assert.NoError(t, err)
            case <-time.After(10 * time.Second):
                t.Fatal("timeout waiting for txFunc goroutine")
            }
        }
        close(done)
    }
}
My understanding is that Ms are created as needed. For the first batch of goroutines, 8 OS threads (the number of virtual cores on my computer) will be created, and the second batch will just reuse those 8 OS threads without creating new ones. Is that correct?
I'd appreciate any materials or blog posts on this topic.
An M is reusable only if your goroutines don't block on syscalls or other blocking operations. In your case you have blocking tasks inside your go func(), so the number of Ms will not be limited to 8 (the number of virtual cores on your machine). When the first batch blocks, each blocked M is detached from its P and waits for its blocking operation to finish, while a new M is created and associated with that P.
A goroutine (G) is created with the go func() statement.
There are two kinds of queues that store Gs: each P has a local run queue, and there is one global queue. A newly created G is saved in the local queue of its P; if the P's local queue is full, it is saved in the global queue.
A G can only run on an M, and a running M must hold a P (a 1:1 relationship). An M pops a runnable G from the local queue of its P; if the local queue is empty, it steals a runnable G from another M/P combination.
The process by which an M schedules and executes Gs is a loop.
When an M executes a syscall or some other blocking operation, the M blocks. If there are Gs left to execute, the runtime detaches this thread M from the P and creates a new OS thread (or reuses an idle thread if one is available) to serve that P.
When the M's system call ends, its G tries to get an idle P to execute on and is put into that P's local queue. If it cannot get a P, the thread M goes to sleep and is added to the pool of idle threads, and the G is placed in the global queue.
1. Number of Ps:
Determined by the environment variable $GOMAXPROCS and the runtime method GOMAXPROCS(). Since Go 1.5, GOMAXPROCS defaults to the number of available cores; before that, it defaulted to 1. This means that at most $GOMAXPROCS goroutines run simultaneously at any time.
2. Number of Ms:
The Go runtime itself sets a limit: when a Go program starts, it sets the maximum number of Ms (10,000 by default). However, the kernel can hardly support that many threads, so this limit can usually be ignored. The SetMaxThreads function in runtime/debug sets the maximum number of Ms. When an M blocks, a new M may be created.
The numbers of Ms and Ps have no absolute relationship: when one M blocks, the P will create or switch to another M. So even if the default number of Ps is 1, there may be many Ms.
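A minimal sketch of the two knobs mentioned above: runtime.GOMAXPROCS controls the number of Ps, and debug.SetMaxThreads caps the number of Ms (the 20000 value below is arbitrary, for illustration only):

package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func main() {
    // With an argument of 0, GOMAXPROCS reports the current number of Ps
    // without changing it.
    fmt.Println("Ps (GOMAXPROCS):", runtime.GOMAXPROCS(0))

    // Raise the default cap on Ms (10000). The program crashes if it
    // ever needs more OS threads than this.
    debug.SetMaxThreads(20000)
}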
Please refer to the following for more details:
https://www.programmersought.com/article/79557885527/
go-goroutine-os-thread-and-cpu-management

Alloy Analyzer: Quantifiers and Set Products

I have encountered the following issue in Alloy. Consider this toy code, which tries to capture even-labeled entities (V1 is for State and V2 is for ProductStateSet):
enum State {s1, s2, s3, s4, s5, s6}
enum DummySet {b, c}
let ProductStateSet = DummySet->State

pred evenV1 (state: State) {
    (state = s2) or (state = s4) or (state = s6)
}
pred evensetV1 (stateset: State) {
    all state: stateset | evenV1[state]
}
assert a2V1 {
    evensetV1[(s2 + s4)]
}

pred evenV2 (state: ProductStateSet) {
    (state = b->s2) or (state = b->s4) or (state = b->s6)
}
assert a1V2 {
    evenV2[b->s2]
}
pred evensetV2 (stateset: ProductStateSet) {
    all state: stateset | evenV2[state]
}
assert a2V2 {
    evensetV2[ (b->s2) + (b->s4) ]
}
The assertion a2V1 is true, but a2V2 is false, whereas I would have expected them to behave the same. Why is this so, and what is the proper way to use quantifiers when dealing with set products?
If I change "evenset" to use "some" rather than "all", there are no issues with evensetV1, but for evensetV2 I get:
pred evensetV2 (stateset: ProductStateSet) {
    some state: stateset | evenV2[state]
}
assert a2V2 {
    evensetV2[ (b->s2) + (b->s4) ]
}
Executing "Check a2V2"
Solver=sat4j Bitwidth=4 MaxSeq=4 SkolemDepth=1 Symmetry=20
Generating CNF...
.
Analysis cannot be performed since it requires higher-order
quantification that could not be skolemized.
Another question for this example regarding set comprehension: I can write an assertion like:
assert a3V1 {
    #{state: State | evenV1[state]} > 2
}
Is there a way to print out the set elements, that is, can I print out the below set?
{state: State | evenV1[state]}
Thanks!
Regarding your first question:
EDITED (first answer was wrong)
Running the assertion gives 2 counterexamples.
One is expected, in the sense that none -> none is not considered in the evenV2 predicate. But the other (see below) doesn't make sense to me.
My only logical explanation is that the quantification variable "state" of evensetV2 is interpreted incorrectly when passed as a parameter to evenV2, even though that seems somewhat far-fetched...
I think whenever possible, one should avoid quantification and prefer more straightforward set operations. Implementing the predicate and assertion as follows solves the problem (you don't even need to differentiate between the singleton and the set approach anymore):
pred evenV2 (state: ProductStateSet) {
    DummySet.state in (s2 + s4 + s6)
}
assert a2V2 {
    evenV2[ (b->s4) + (b->s6) ]
}
END EDIT
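In the same set-based style, the higher-order-quantification error from the "some" version can also be avoided. A hedged sketch (the predicate and assertion names below are made up for illustration):

pred someEvenV2 (stateset: ProductStateSet) {
    // nonempty intersection instead of an existential quantifier
    some (DummySet.stateset & (s2 + s4 + s6))
}
assert a4V2 {
    someEvenV2[ (b->s1) + (b->s4) ]
}
check a4V2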
For the second question, the Alloy evaluator is your friend: run any command, then open the Evaluator in the instance window and type {state: State | evenV1[state]} to see the set's elements.

How are initial states established in dynamic models under Electrum 2?

I cribbed from the hotel door lock example and came up with this MWE for vehicle doors.
enum LockState {Locked, Unlocked}

sig Door {
    var state: LockState
}
sig Vehicle {
    doors : disj set Door
}

// actions
pred unlock[d: Door] {
    d.state' = Unlocked
}
pred lock[d: Door] {
    d.state' = Locked
}

// traces
pred init {
    all s: Door.state | s = Locked
}
pred trace {
    init
    always {
        some d: Door |
            unlock[d] or
            lock[d]
    }
}

// demonstrate
run {} for 4 but exactly 2 Vehicle, 4 Time
Which, to my surprise, allows the instance shown below, in which some doors are locked and some are not. How do I establish the condition that all doors are locked at the earliest time?
Initial states are defined without any temporal keywords, as you did in init.
The problem is that you defined your trace as a predicate. If you define it as a fact, it will always be applied. However, if you keep it as a predicate (my preference, since it feels less global), you must include it from the run command. Pick one:
run trace for 4 but exactly 2 Vehicle, 4 Time
run { trace } for 4 but exactly 2 Vehicle, 4 Time
However, your model will then still not run well.
You provide an always but no goal, so after one state Alloy is happy. You should provide an eventually so Alloy will attempt to continue until it is satisfied.
You allow vehicles without doors; I would use some Door instead of set Door.
Your init can be written more cleanly as Door.state = Locked.
In your trace, each step sets one Door, but you're not specifying what the state of the other doors should be. If you do not specify a value for the next state, they can become anything; they should be explicitly constrained to keep their old value.
So I came up with the following model:
enum LockState { Locked, Unlocked }

sig Door { var state: LockState }
sig Vehicle { doors : disj some Door }

pred Door.unlock { this.state' = Unlocked }
pred Door.lock { this.state' = Locked }

pred trace {
    Door.state = Locked
    always (
        some d: Vehicle.doors {
            (d.unlock or d.lock)
            unchanged[state, d]
        }
    )
    eventually Door.state = Unlocked
}

run trace for 4 but exactly 2 Vehicle

pred unchanged[ r : univ->univ, x : set univ ] {
    (r - x->univ)' = (r - x->univ)
}
Updated: added an unchanged predicate.

Lock-free programming: reordering and memory order semantics

I am trying to find my feet in lock-free programming. Having read different explanations of memory ordering semantics, I would like to clear up what reordering may happen. As far as I understand, instructions may be reordered by the compiler (as an optimization when the program is compiled) and by the CPU (at runtime?).
For the relaxed semantics cpp reference provides the following example:
// Thread 1:
r1 = y.load(memory_order_relaxed); // A
x.store(r1, memory_order_relaxed); // B
// Thread 2:
r2 = x.load(memory_order_relaxed); // C
y.store(42, memory_order_relaxed); // D
It is said that, with x and y initially zero, the code is allowed to produce r1 == r2 == 42 because, although A is sequenced-before B within thread 1 and C is sequenced-before D within thread 2, nothing prevents D from appearing before A in the modification order of y, and B from appearing before C in the modification order of x. How could that happen? Does it imply that C and D get reordered, so the execution order would be DABC? Is it allowed to reorder A and B?
For the acquire-release semantics there is the following sample code:
std::atomic<std::string*> ptr;
int data;

void producer()
{
    std::string* p = new std::string("Hello");
    data = 42;
    ptr.store(p, std::memory_order_release);
}

void consumer()
{
    std::string* p2;
    while (!(p2 = ptr.load(std::memory_order_acquire)))
        ;
    assert(*p2 == "Hello"); // never fires
    assert(data == 42);     // never fires
}
I'm wondering: what if we used relaxed memory order instead of acquire? I guess the value of data could be read before p2 = ptr.load(std::memory_order_relaxed), but what about p2?
Finally, why is it fine to use relaxed memory order in this case?
template<typename T>
class stack
{
    std::atomic<node<T>*> head;
public:
    void push(const T& data)
    {
        node<T>* new_node = new node<T>(data);
        // put the current value of head into new_node->next
        new_node->next = head.load(std::memory_order_relaxed);
        // now make new_node the new head, but if the head
        // is no longer what's stored in new_node->next
        // (some other thread must have inserted a node just now)
        // then put that new head into new_node->next and try again
        while (!head.compare_exchange_weak(new_node->next, new_node,
                                           std::memory_order_release,
                                           std::memory_order_relaxed))
            ; // the body of the loop is empty
    }
};
I mean both head.load(std::memory_order_relaxed) and head.compare_exchange_weak(new_node->next, new_node, std::memory_order_release, std::memory_order_relaxed).
To summarize all of the above: when do I have to care about potential reordering, and when don't I?
For #1, the compiler may issue the store to y before the load from x (there are no dependencies between them), and even if it doesn't, the load from x can be delayed at the CPU/memory level.
For #2, p2 would be non-null, but neither *p2 nor data would necessarily have a meaningful value.
For #3, there is only one act of publishing the non-atomic stores made by this thread, and it is a release.
You should always care about reordering, or better, not assume any order: neither C++ nor the hardware executes code top to bottom; they only respect dependencies.
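To illustrate the point about #2, here is a minimal sketch of the producer/consumer example with both orderings relaxed. Note this is a data race on data and *p2, which the standard makes undefined behavior; the comments only describe what one could plausibly observe on a weakly ordered CPU.

#include <atomic>
#include <string>
#include <thread>

std::atomic<std::string*> ptr{nullptr};
int data = 0;

void producer()
{
    std::string* p = new std::string("Hello");
    data = 42;
    // relaxed: nothing orders the two writes above before this store
    ptr.store(p, std::memory_order_relaxed);
}

void consumer()
{
    std::string* p2;
    while (!(p2 = ptr.load(std::memory_order_relaxed)))
        ;
    // p2 itself is a valid pointer value (the load was atomic), but this
    // thread has no happens-before edge with the producer's earlier
    // writes, so assertions like these may fire:
    // assert(*p2 == "Hello");
    // assert(data == 42);
}

int main()
{
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
}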
