What is the difference between the priority-related values in task_struct?
Here are my notes on the priority fields. I also include how the data is presented via the 'ps' command (which gets its data from /proc/pid/stat, among other places).
task_struct.prio:
0-99 -> Realtime
100-139 -> Normal priority
ps/stat "prio" field:
task_struct.prio - MAX_RT_PRIO (100)
(-100)-(-1) -> Realtime
0-39 -> Normal priority
stat "rt_priority" field:
0 -> normal
1-99 -> realtime
stat "policy" field:
0 -> SCHED_OTHER (normal)
1 -> SCHED_FIFO (realtime)
2 -> SCHED_RR (realtime)
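As a quick illustration of the arithmetic above, here is a small sketch (mine, not from the kernel sources; it assumes MAX_RT_PRIO is 100, as in the notes) that maps a raw task_struct.prio value to the value ps-style tools display and to a realtime/normal classification:

package main

import "fmt"

// maxRTPrio mirrors the kernel's MAX_RT_PRIO (assumed to be 100, as in the notes above).
const maxRTPrio = 100

// describe maps a raw task_struct.prio value to the ps/stat-style "prio"
// value (prio - MAX_RT_PRIO) and classifies the task as realtime or normal.
func describe(kernelPrio int) (psPrio int, class string) {
    psPrio = kernelPrio - maxRTPrio
    if kernelPrio < maxRTPrio {
        class = "realtime"
    } else {
        class = "normal"
    }
    return psPrio, class
}

func main() {
    for _, p := range []int{0, 49, 99, 100, 120, 139} {
        psPrio, class := describe(p)
        fmt.Printf("task_struct.prio=%3d -> ps prio=%4d (%s)\n", p, psPrio, class)
    }
}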
I just learned the Go GMP model, and now I understand how goroutines (G), OS threads (M), and Go processors (P) cooperate with each other. But I still don't understand when an M and a P are created.
For example, I have test code that runs some operations against a database in two test cases (two batches of goroutines):
func Test_GMP(t *testing.T) {
    for range []struct {
        name string
    }{
        {"first batch"},
        {"second batch"},
    } {
        goroutineSize := 50
        done := make(chan error, goroutineSize)
        for i := 0; i < goroutineSize; i++ {
            go func() {
                // do some database operations...
                // each goroutine should be blocked here for some time...
                // propagate the result
                done <- nil
            }()
        }
        for i := 0; i < goroutineSize; i++ {
            select {
            case err := <-done:
                assert.NoError(t, err)
            case <-time.After(10 * time.Second):
                t.Fatal("timeout waiting for txFunc goroutine")
            }
        }
        close(done)
    }
}
My understanding is that an M is created on demand. In the first batch of goroutines, 8 OS threads (the number of virtual cores on my machine) will be created, and the second batch will simply reuse those 8 OS threads without creating new ones. Is that correct?
I'd appreciate any pointers to further materials or blog posts on this topic.
An M is reusable only if your goroutines never block in syscalls or other blocking operations. In your case there is blocking work inside your go func(), so the number of Ms will not be limited to 8 (the number of virtual cores on your machine). The Ms running the first batch block and are detached from their Ps while they wait for the blocking work to finish, and new Ms are created and associated with those Ps in the meantime.
- We create a goroutine with the go func() statement.
- There are two kinds of queues that store Gs: the local run queue of each P, and the global run queue. A newly created G is put into the local queue of the current P; if that local queue is full, it is put into the global queue.
- A G can only run on an M, and an M must hold a P (M and P are in a 1:1 relationship while running). The M pops a runnable G from its P's local queue; if that local queue is empty, it tries to steal a runnable G from another M/P combination (or takes one from the global queue).
- The process by which an M schedules and executes Gs is a loop.
- When an M executes a syscall or some other blocking operation, the M blocks. If there are Gs waiting to run, the runtime detaches this M from its P and creates a new OS thread (or reuses an idle thread, if one is available) to serve that P.
- When the syscall ends, the G tries to acquire an idle P and, if it gets one, is put into that P's local run queue. If it cannot get a P, the M goes to sleep and joins the pool of idle threads, and the G is placed in the global queue.
1. Number of Ps:
Determined by the environment variable $GOMAXPROCS or by calling runtime.GOMAXPROCS() at run time. Since Go 1.5, GOMAXPROCS defaults to the number of available cores; before that the default was 1. This means that at most $GOMAXPROCS goroutines are executing on CPUs at any given moment.
2. Number of Ms:
The Go runtime sets a maximum number of Ms when the program starts (10000 by default). The kernel can rarely support that many threads, so in practice this limit can be ignored. The SetMaxThreads function in runtime/debug changes the maximum. When an M blocks, a new M may be created.
There is no fixed relationship between the number of Ms and the number of Ps: when an M blocks, its P is handed to another (possibly newly created) M, so even with a single P there may be many Ms.
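To see this in a running program, here is a small self-contained sketch (mine, not from the question). It uses runtime.LockOSThread as a blunt stand-in for goroutines stuck in blocking syscalls: each locked goroutine keeps its own OS thread while it "blocks", and its P is released to other Ms in the meantime. The threadcreate profile counts how many OS threads (Ms) the runtime has created, and here it ends up well above GOMAXPROCS:

package main

import (
    "fmt"
    "runtime"
    "runtime/pprof"
    "sync"
    "time"
)

func main() {
    fmt.Println("GOMAXPROCS (number of Ps):", runtime.GOMAXPROCS(0))

    var wg sync.WaitGroup
    for i := 0; i < 50; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Wire this goroutine to its own OS thread for its lifetime.
            // While it sleeps below, its P is handed to another M, much
            // like a goroutine blocked in a syscall.
            runtime.LockOSThread()
            time.Sleep(200 * time.Millisecond) // stand-in for blocking DB work
        }()
    }
    wg.Wait()

    // threadcreate reports how many OS threads (Ms) were ever created;
    // expect a number well above GOMAXPROCS here.
    fmt.Println("OS threads created (Ms):", pprof.Lookup("threadcreate").Count())
}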
Please refer to the following for more details:
https://www.programmersought.com/article/79557885527/
go-goroutine-os-thread-and-cpu-management
I'm investigating a web app whose memory usage grew to about 10 GB, by analysing a memory dump using WinDbg.
Here's the bottom of the !dumpheap -stat output:
00007ff9545df5d0 166523 13321840 System.Runtime.Caching.MemoryCache
00007ff9545df4a0 166523 14654024 System.Runtime.Caching.CacheMemoryMonitor
00007ff9545de990 166523 14654024 System.Runtime.Caching.SRef[]
00007ff9545dcef0 166523 14654024 System.Runtime.Caching.GCHandleRef`1[[System.Runtime.Caching.MemoryCacheStore, System.Runtime.Caching]][]
00007ff9545dfb28 166523 19982760 System.Runtime.Caching.MemoryCacheStatistics
00007ff956778510 333059 21315680 System.Int64[]
00007ff95679c988 41597 31250111 System.Byte[]
00007ff9545e08c8 1332184 31972416 System.Runtime.Caching.MemoryCacheEqualityComparer
00007ff9545dfe48 1332184 31972416 System.Runtime.Caching.SRef
00007ff956780ff0 1332200 31972800 System.SizedReference
00007ff956724620 1498777 35970648 System.Threading.TimerHolder
00007ff95677fb30 1536170 36868080 System.Runtime.Remoting.Messaging.CallContextSecurityData
00007ff956796f28 1606960 38567040 System.Object
00007ff9545df810 1332184 42629888 System.Runtime.Caching.GCHandleRef`1[[System.Runtime.Caching.MemoryCacheStore, System.Runtime.Caching]]
00007ff9545dda38 1332184 42629888 System.Runtime.Caching.UsageBucket[]
00007ff9567ae930 1332268 42632576 Microsoft.Win32.SafeHandles.SafeWaitHandle
00007ff9545df968 1498707 47958624 System.Runtime.Caching.GCHandleRef`1[[System.Threading.Timer, mscorlib]]
00007ff9567adbf8 1498777 47960864 System.Threading.Timer
00007ff9545dff50 1332184 53287360 System.Runtime.Caching.CacheUsage
00007ff94986ead8 1536137 61445480 System.Web.Hosting.AspNetHostExecutionContextManager+AspNetHostExecutionContext
00007ff9567a2838 1332210 63946080 System.Threading.ManualResetEvent
00007ff956796948 293525 66384986 System.String
00007ff9545dfef0 1332184 74602304 System.Runtime.Caching.CacheExpires
00007ff9567add20 1498760 95920640 System.Threading.TimerCallback
00007ff9545dfa90 1332184 106574720 System.Runtime.Caching.MemoryCacheStore
00007ff95679b3b0 1333289 106663120 System.Collections.Hashtable
00007ff95678f138 1536171 110604312 System.Runtime.Remoting.Messaging.LogicalCallContext
00007ff9545dffb0 1332184 127889664 System.Runtime.Caching.UsageBucket
00007ff95679d1e0 1333292 128664768 System.Collections.Hashtable+bucket[]
00007ff9567245c0 1498777 131892376 System.Threading.TimerQueueTimer
00007ff9567aec48 1536255 135190440 System.Threading.ExecutionContext
00007ff9545dcf78 1332184 351696576 System.Runtime.Caching.ExpiresBucket[]
000000f82c79d9f0 473339 385303992 Free
00007ff956799220 40309535 1617342672 System.Int32[]
00007ff9545e0468 39965520 3836689920 System.Runtime.Caching.ExpiresBucket
So there are nearly 40 million instances of System.Runtime.Caching.ExpiresBucket, totalling nearly 4 GB of the used memory. System.Runtime.Caching classes appear a lot among the top offenders.
I took a random instance of the System.Runtime.Caching.ExpiresBucket class and ran !gcroot on it. It took ages (maybe 30 minutes) to produce one rooting thread; there may have been more, but I interrupted the operation at that point.
The chain of references is over 1.5 million lines long! But I can show the important bits here:
0:000> !gcroot 000000f82dd4bc28
Thread 1964:
000000fcbdbce6a0 00007ff8f9bbe388 Microsoft.AspNet.SignalR.SqlServer.ObservableDbOperation.ExecuteReaderWithUpdates(System.Action`2<System.Data.IDataRecord,Microsoft.AspNet.SignalR.SqlServer.DbOperation>)
rbp-58: 000000fcbdbce6e8
-> 000000fa2d1f26a0 Microsoft.AspNet.SignalR.SqlServer.ObservableDbOperation+<>c__DisplayClass1e
-> 000000fa2d1f2110 Microsoft.AspNet.SignalR.SqlServer.ObservableDbOperation
-> 000000fa2d1f24d0 System.Action
-> 000000fa2d1f24a8 System.Object[]
-> 000000fa2d1f2468 System.Action
-> 000000fa2d1f1008 Microsoft.AspNet.SignalR.SqlServer.SqlReceiver
-> 000000fa2d1f1330 System.Action
-> 000000fa2d1f1308 System.Object[]
-> 000000fa2d1f12c8 System.Action
-> 000000fa2d1efb70 Microsoft.AspNet.SignalR.SqlServer.SqlStream
-> 000000fa2d1f1528 System.Action
-> 000000fa2d1f1500 System.Object[]
-> 000000fa2d1f14c0 System.Action
-> 000000fa2d1efb20 Microsoft.AspNet.SignalR.SqlServer.SqlMessageBus+<>c__DisplayClass3
-> 000000f92d0b84e0 Microsoft.AspNet.SignalR.SqlServer.SqlMessageBus
-> 000000f92d0b9568 System.Threading.Timer
-> 000000f92d0b96d8 System.Threading.TimerHolder
-> 000000f92d0b95a0 System.Threading.TimerQueueTimer
[... about 100 lines of the same TimerQueueTimer line above, but different memory addresses each time]
-> 000000f92cf1be68 System.Threading.TimerQueueTimer
-> 000000f92cf1be08 System.Threading.TimerCallback
-> 000000f92cf1bb48 System.Web.RequestTimeoutManager
-> 000000f92cf1bb80 System.Web.Util.DoubleLinkList[]
-> 000000f92cf1bc00 System.Web.Util.DoubleLinkList
-> 000000fb61323860 System.Web.RequestTimeoutManager+RequestTimeoutEntry
-> 000000fb6131fd38 System.Web.HttpContext
-> 000000fbe682a480 ASP.global_asax
-> 000000fbe682ac00 System.Web.HttpModuleCollection
-> 000000fbe682ac60 System.Collections.ArrayList
-> 000000fbe682b598 System.Object[]
-> 000000fbe682b018 System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry
-> 000000fbe682b000 System.Web.Routing.UrlRoutingModule
-> 000000faacec1f40 System.Web.Routing.RouteCollection
-> 000000faacec2030 System.Collections.Generic.List`1[[System.Web.Routing.RouteBase, System.Web]]
-> 000000fa2cfe4d80 System.Web.Routing.RouteBase[]
-> 000000f9acf14cd8 System.Web.Http.WebHost.Routing.HttpWebRoute
-> 000000f9acf149f8 System.Web.Http.Routing.RouteCollectionRoute
-> 000000f9acf1f4f0 System.Web.Http.Routing.SubRouteCollection
-> 000000f9acf1f510 System.Collections.Generic.List`1[[System.Web.Http.Routing.IHttpRoute, System.Web.Http]]
-> 000000fa2cf8f310 System.Web.Http.Routing.IHttpRoute[]
-> 000000fa2ceff770 System.Web.Http.Routing.HttpRoute
-> 000000fa2ceff678 System.Web.Http.Routing.HttpRouteValueDictionary
-> 000000fa2ceff6f0 System.Collections.Generic.Dictionary`2+Entry[[System.String, mscorlib],[System.Object, mscorlib]][]
-> 000000fa2cef9e78 System.Web.Http.Controllers.HttpActionDescriptor[]
-> 000000fa2cef7898 System.Web.Http.Controllers.ReflectedHttpActionDescriptor
-> 000000f9aced4608 System.Web.Http.HttpConfiguration
-> 000000f9aced4db0 System.Net.Http.Formatting.MediaTypeFormatterCollection
-> 000000f9aced6f40 System.Collections.Generic.List`1[[System.Net.Http.Formatting.MediaTypeFormatter, System.Net.Http.Formatting]]
-> 000000f9aced6f80 System.Net.Http.Formatting.MediaTypeFormatter[]
-> 000000f9aced4df8 System.Net.Http.Formatting.JsonMediaTypeFormatter
-> 000000f9acf1f448 System.Web.Http.Validation.ModelValidationRequiredMemberSelector
-> 000000f9acf1f468 System.Collections.Generic.List`1[[System.Web.Http.Validation.ModelValidatorProvider, System.Web.Http]]
-> 000000f9acf1f490 System.Web.Http.Validation.ModelValidatorProvider[]
-> 000000f9acf1db40 Ninject.Web.WebApi.Validation.NinjectDefaultModelValidatorProvider
-> 000000faaceca438 Ninject.StandardKernel
-> 000000faaceca498 Ninject.Components.ComponentContainer
-> 000000faaceca538 System.Collections.Generic.Dictionary`2[[System.Type, mscorlib],[Ninject.Components.INinjectComponent, Ninject]]
-> 000000f9acece000 System.Collections.Generic.Dictionary`2+Entry[[System.Type, mscorlib],[Ninject.Components.INinjectComponent, Ninject]][]
-> 000000f9acecdac8 Ninject.Activation.Caching.GarbageCollectionCachePruner
-> 000000f9acecdcb8 System.Threading.Timer
-> 000000f9acecdd30 System.Threading.TimerHolder
-> 000000f9acecdcd8 System.Threading.TimerQueueTimer
[... just under 1.5 million lines of the same TimerQueueTimer line above, but different memory addresses each time]
-> 000000f82dd4c028 System.Threading.TimerQueueTimer
-> 000000f82dd4bfc8 System.Threading.TimerCallback
-> 000000f82dd4ada0 System.Runtime.Caching.CacheExpires
-> 000000f82dd4add8 System.Runtime.Caching.ExpiresBucket[]
-> 000000f82dd4bc28 System.Runtime.Caching.ExpiresBucket
Running !objsize on the 000000faaceca438 Ninject.StandardKernel seems to take forever, implying that it references an awful lot of objects, possibly all 40 million of those System.Runtime.Caching.ExpiresBucket objects...
What is causing the leak? How should I go about identifying the offending class or code? There is no reference to any of our own code in the !gcroot output, so is it due to a bug in one of the libraries we're using? Is it in Ninject? We're using v3.2.2 (not the latest, I know).
Posting as an answer because it's too long for a comment:
It seems to me that Ninject is tracking an awful lot of scope objects; as far as I recall, that's what the GarbageCollectionCachePruner is for. A scope object is something that is defined with .InScope(...this object here...) or with one of the overloads like InRequestScope().
The GarbageCollectionCachePruner has a timer that periodically checks whether each scope is still "alive". If a scope is not alive anymore (it has been garbage collected), Ninject disposes and forgets all objects associated with that scope.
Unless there's a bug in Ninject, this would mean your application is creating an awful lot of scopes in a short interval, or they are not being cleaned up properly (meaning: a problem in your code or in other third-party code).
By the way, if the scope object implements INotifyWhenDisposed (a Ninject interface), the periodic IsAlive check is not necessary, and you also get the benefit of "deterministic" disposal, i.e. the scoped objects are disposed exactly when the scope ends. Otherwise disposal depends on the GC and the timer in Ninject...
I'm pretty new to SPIN and Promela, and I came across this output when trying to verify a liveness property in my model.
Error output:
unreached in proctype P
(0 of 29 states)
unreached in proctype monitor
mutex_assert.pml:39, state 1, "assert(!((mutex>1)))"
mutex_assert.pml:42, state 2, "-end-"
(2 of 2 states)
unreached in init
(0 of 3 states)
unreached in claim ltl_0
_spin_nvr.tmp:10, state 13, "-end-"
(1 of 13 states)
pan: elapsed time 0 seconds
The code is basically an implementation of Peterson's algorithm. I checked the safety property and it seems to hold, but whenever I try to verify the liveness property ltl {[]((wait -> <> (cs)))}, I get the messages above. I'm not sure what they mean, so I don't know how to proceed...
My code is as follows:
#define N 3
#define wait (P[1]#WAIT)
#define cs (P[1]#CRITICAL)
int pos[N];
int step[N];
int enter;
byte mutex;
ltl {[]((wait -> <> (cs)))}
proctype P(int i) {
    int t;
    int k;

WAIT:
    for (t : 1 .. (N-1)) {
        pos[i] = t;
        step[t] = i;
        k = 0;
        do
        :: atomic { (k != i && k < N && (pos[k] < t || step[t] != i)) -> k++ }
        :: atomic { k == i -> k++ }
        :: atomic { k == N -> break }
        od;
    }

CRITICAL:
    atomic {
        mutex++;
        printf("MSC: P(%d) HAS ENTERED THE CRITICAL SECTION.\n", i);
        mutex--;
    }
    pos[i] = 0;
}

init {
    atomic { run P(0); }
}
General Answer
This is a warning telling you that some states are unreachable due to transitions that are never taken.
In general, this is not an error, but it is good practice to take a close look at the unreached states of every process you modelled and check that you indeed expect them to be unreachable, i.e. that the unreachable states are not a sign that the model is incorrect with respect to the intended behaviour.
Note: you can place a label starting with end (e.g. end:) in front of a particular statement to mark a valid end state and get rid of some of these warnings, e.g. when a process is not meant to terminate. More info here.
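For instance, here is a minimal sketch (mine, not taken from the question) of a process that is not meant to terminate; the end-state label tells the verifier that it is acceptable for the process to still be sitting at the do loop when execution stops:

chan req = [1] of { byte };

active proctype Server() {
    byte x;
end:
    do
    :: req ? x -> printf("got %d\n", x)
    od
}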
Specific Answer
I cannot reproduce your output. In particular, by running
~$ spin -a file.pml
~$ gcc pan.c
~$ ./a.out -a
I get the following output, which is different from yours:
(Spin Version 6.4.3 -- 16 December 2014)
+ Partial Order Reduction
Full statespace search for:
never claim + (ltl_0)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 64 byte, depth reached 47, errors: 0
41 states, stored (58 visited)
18 states, matched
76 transitions (= visited+matched)
0 atomic steps
hash conflicts: 0 (resolved)
Stats on memory usage (in Megabytes):
0.004 equivalent memory usage for states (stored*(State-vector + overhead))
0.288 actual memory usage for states
128.000 memory used for hash table (-w24)
0.534 memory used for DFS stack (-m10000)
128.730 total actual memory usage
unreached in proctype P
(0 of 29 states)
unreached in init
(0 of 3 states)
unreached in claim ltl_0
_spin_nvr.tmp:10, state 13, "-end-"
(1 of 13 states)
pan: elapsed time 0 seconds
In particular, I do not get the warnings about the unreached states in the monitor process. As far as I can tell, judging from the source code, none of the warnings I obtained is problematic.
Either you are using a different version of Spin than me, or you did not include the full source code in your question. In the latter case, could you edit your question and add the code? I'll update my answer afterwards.
EDIT: in the comments, you ask what the following message means: unreached in claim ltl_0, _spin_nvr.tmp:10, state 13, "-end-".
If you open the file _spin_nvr.tmp, you can see the following piece of Promela code, which corresponds to a Büchi automaton that accepts exactly those executions which violate your LTL property []((wait -> <> (cs))).
never ltl_0 {    /* !([] ((! ((P[1]#WAIT))) || (<> ((P[1]#CRITICAL))))) */
T0_init:
    do
    :: (! ((! ((P[1]#WAIT)))) && ! (((P[1]#CRITICAL)))) -> goto accept_S4
    :: (1) -> goto T0_init
    od;
accept_S4:
    do
    :: (! (((P[1]#CRITICAL)))) -> goto accept_S4
    od;
}
The message simply warns you that the execution of this code will never reach the final closing bracket } (the "-end-" state), meaning that the claim never terminates.
Is it possible to get multiple (or all) violation traces for a property using Spin?
As an example, I created the Promela model below:
byte mutex = 0;
active proctype A() {
A1: mutex==0; /* Is free? */
A2: mutex++; /* Get mutex */
A3: /* A's critical section */
A4: mutex--; /* Release mutex */
}
active proctype B() {
B1: mutex==0; /* Is free? */
B2: mutex++; /* Get mutex */
B3: /* B's critical section */
B4: mutex--; /* Release mutex */
}
ltl {[] (mutex < 2)}
It has a naive mutex implementation. One might expect that processes A and B would never be in their critical sections at the same time, and I wrote an LTL property to check that.
Running
spin -run mutex_example.pml
shows that the property is not valid and running
spin -p -t mutex_example.pml
shows the sequence of statements that violates the property.
Never claim moves to line 4 [(1)]
2: proc 1 (B:1) mutex_example.pml:11 (state 1) [((mutex==0))]
4: proc 0 (A:1) mutex_example.pml:4 (state 1) [((mutex==0))]
6: proc 1 (B:1) mutex_example.pml:12 (state 2) [mutex = (mutex+1)]
8: proc 0 (A:1) mutex_example.pml:5 (state 2) [mutex = (mutex+1)]
spin: _spin_nvr.tmp:3, Error: assertion violated
spin: text of failed assertion: assert(!(!((mutex<2))))
Never claim moves to line 3 [assert(!(!((mutex<2))))]
spin: trail ends after 9 steps
#processes: 2
mutex = 2
9: proc 1 (B:1) mutex_example.pml:14 (state 3)
9: proc 0 (A:1) mutex_example.pml:7 (state 3)
9: proc - (ltl_0:1) _spin_nvr.tmp:2 (state 6)
This shows that the sequence of statements (indicated by labels) 'B1' -> 'A1' -> 'B2' -> 'A2' violates the property, but there are other interleavings that lead to a violation (e.g. 'A1' -> 'B1' -> 'B2' -> 'A2').
Can I ask Spin to give me multiple (or all) traces?
I doubt that you can get all violation traces in Spin.
For example, if we consider the following model, then there are infinitely many counter-examples.
byte mutex = 0;

active [2] proctype P() {
    do
    :: mutex == 0 ->
        mutex++;
        /* critical section */
        mutex--;
    od
}

ltl {[] (mutex <= 1)}
What you can do is use different search algorithms for the verifier, which might yield different counter-examples:
-search (or -run) generate a verifier, and compile and run it
options before -search are interpreted by spin to parse the input
options following a -search are used to compile and run the verifier pan
valid options that can follow a -search argument include:
-bfs perform a breadth-first search
-bfspar perform a parallel breadth-first search
-bcs use the bounded-context-switching algorithm
-bitstate or -bit, use bitstate storage
-biterate use bitstate with iterative search refinement (-w18..-w35)
-swarmN,M like -biterate, but running all iterations in parallel
perform N parallel runs and increment -w every M runs
default value for N is 10, default for M is 1
-link file.c link executable pan to file.c
-collapse use collapse state compression
-hc use hash-compact storage
-noclaim ignore all ltl and never claims
-p_permute use process scheduling order permutation
-p_rotateN use process scheduling order rotation by N
-p_reverse use process scheduling order reversal
-ltl p verify the ltl property named p
-safety compile for safety properties only
-i use the dfs iterative shortening algorithm
-a search for acceptance cycles
-l search for non-progress cycles
similarly, a -D... parameter can be specified to modify the compilation
and any valid runtime pan argument can be specified for the verification
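For example, one way to apply this (a sketch based on the options listed above and on the commands already used in the question) is to run the verifier several times with different search strategies and compare the trail each run produces:

spin -search mutex_example.pml          # default depth-first search
spin -search -bfs mutex_example.pml     # breadth-first search, typically a different (often shorter) trail
spin -search -i mutex_example.pml       # iterative shortening of the counter-example
spin -p -t mutex_example.pml            # replay the trail written by the last run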
I'm following an example in Daniel Jackson's excellent book (Software Abstractions), specifically the example in which he has a token-ring setup in order to elect a leader.
I'm attempting to extend this example (ring election) so that the token, instead of being limited to one process, is passed around to all members within the provided time (with each member elected exactly once, not multiple times). However, mostly due to my inexperience with Alloy, I'm having trouble figuring out the best way to do this. Initially I thought I could play with some of the operators (changing -'s to +'s), but I don't seem to be quite hitting the nail on the head.
Below is the code from the example. I've marked up a few areas with questions...any and all help is appreciated. I'm using Alloy 4.2.
module chapter6/ringElection1 --- the version up to the top of page 181
open util/ordering[Time] as TO
open util/ordering[Process] as PO
sig Time {}
sig Process {
    succ: Process,
    toSend: Process -> Time,
    elected: set Time
}

// ensure processes are in a ring
fact ring {
    all p: Process | Process in p.^succ
}

pred init [t: Time] {
    all p: Process | p.toSend.t = p
}

//QUESTION: I'd thought that within this predicate and the following fact, that I could
// change the logic from only having one election at a time to all being elected eventually.
// However, I can't seem to get the logic down for this portion.
pred step [t, t': Time, p: Process] {
    let from = p.toSend, to = p.succ.toSend |
        some id: from.t {
            from.t' = from.t - id
            to.t' = to.t + (id - p.succ.prevs)
        }
}

fact defineElected {
    no elected.first
    all t: Time-first | elected.t = {p: Process | p in p.toSend.t - p.toSend.(t.prev)}
}

fact traces {
    init [first]
    all t: Time-last |
        let t' = t.next |
            all p: Process |
                step [t, t', p] or step [t, t', succ.p] or skip [t, t', p]
}

pred skip [t, t': Time, p: Process] {
    p.toSend.t = p.toSend.t'
}
pred show { some elected }
run show for 3 Process, 4 Time
// This generates an instance similar to Fig 6.4
//QUESTION: here I'm attempting to assert that ALL Processes have an election,
// however the 'all' keyword has been deprecated. Is there an appropriate command in
// Alloy 4.2 to take the place of this?
assert OnlyOneElected { all elected.Time }
check OnlyOneElected for 10 Process, 20 Time
This network protocol is precisely about electing a single process as the leader, so I don't really understand what you mean by having "all processes elected eventually".
Instead of all elected.Time, you can equivalently write elected.Time = Process (since the type of elected is Process -> Time). This just says that elected.Time (all processes elected at any time step) is exactly the set of all processes, which, obviously, doesn't mean that "only one process is elected", as suggested by the name of your assertion.
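If the intent really is that every process is eventually elected, that expression can be packaged as its own assertion, e.g. (a sketch; the assertion name is mine, the check bound is copied from the question):

assert EveryoneElected { elected.Time = Process }
check EveryoneElected for 10 Process, 20 Time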