How to make a non-initialised variable in Spin? - model-checking

It seems that Promela initialises each variable (by default, to 0, or to the value that is given in the declaration).
How can I declare a variable that is initialised by an unknown value?
The documentation suggests if :: p = 0 :: p = 1 fi but I don't think
that it works: Spin still verifies this claim
bit p
init { if :: p = 0 :: p = 1 fi }
ltl { ! p }
(and falsifies p)
So what exactly is the semantics of init? There still is some
"pre-initial" state? How can I work around this - and not confuse my students?

This is an interesting question.
The documentation says that each and every variable is initialised to 0, unless the model specifies otherwise.
As with all variable declarations, an explicit initialization field is optional. The default initial value for all variables is zero. This applies both to scalar variables and to array variables, and it applies to both global and to local variables.
In your model, you don't initialise the variable when you declare it, so it gets the default value 0 in the initial state, which precedes your non-deterministic assignment:
bit p
init {
// THE INITIAL STATE IS HERE
if
:: p = 0
:: p = 1
fi
}
ltl { ! p }
An Experiment.
A "naive" idea for dodging this limitation would be to modify the C source code of pan.c, which Spin generates when you invoke ~$ spin -a test.pml, so that the variable is initialised at random.
Instead of this initialisation function:
void
iniglobals(int calling_pid)
{
now.p = 0;
#ifdef VAR_RANGES
logval("p", now.p);
#endif
}
one could try writing this:
void
iniglobals(int calling_pid)
{
srand(time(NULL));
now.p = rand() % 2;
#ifdef VAR_RANGES
logval("p", now.p);
#endif
}
and adding an #include <time.h> in the header part.
However, once you compile that into a verifier with gcc pan.c and run it, you obtain non-deterministic verification results, depending on the value with which the variable p happens to be initialised.
It can both determine that the property is violated:
~$ ./a.out -a
pan:1: assertion violated !( !( !(p))) (at depth 0)
pan: wrote test.pml.trail
(Spin Version 6.4.3 -- 16 December 2014)
Warning: Search not completed
+ Partial Order Reduction
Full statespace search for:
never claim + (ltl_0)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 28 byte, depth reached 0, errors: 1
1 states, stored
0 states, matched
1 transitions (= stored+matched)
0 atomic steps
hash conflicts: 0 (resolved)
Stats on memory usage (in Megabytes):
0.000 equivalent memory usage for states (stored*(State-vector + overhead))
0.291 actual memory usage for states
128.000 memory used for hash table (-w24)
0.534 memory used for DFS stack (-m10000)
128.730 total actual memory usage
pan: elapsed time 0 seconds
or print that the property is satisfied:
~$ ./a.out -a
(Spin Version 6.4.3 -- 16 December 2014)
+ Partial Order Reduction
Full statespace search for:
never claim + (ltl_0)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 28 byte, depth reached 0, errors: 0
1 states, stored
0 states, matched
1 transitions (= stored+matched)
0 atomic steps
hash conflicts: 0 (resolved)
Stats on memory usage (in Megabytes):
0.000 equivalent memory usage for states (stored*(State-vector + overhead))
0.291 actual memory usage for states
128.000 memory used for hash table (-w24)
0.534 memory used for DFS stack (-m10000)
128.730 total actual memory usage
unreached in init
test.pml:8, state 5, "-end-"
(1 of 5 states)
unreached in claim ltl_0
_spin_nvr.tmp:8, state 8, "-end-"
(1 of 8 states)
pan: elapsed time 0 seconds
Clearly, the initial state of a Promela model verified by Spin is assumed to be unique. After all, that is a reasonable assumption, since allowing several initial states would needlessly complicate things: you can always replace N different initial states S_i with a single initial state S such that each S_i is reachable from S by an epsilon-transition. In this context what you get is not truly an epsilon-transition, but in practice it makes little difference.
EDIT (from comments):
In principle, it is possible to make this work by modifying pan.c a little bit further:
transform the initial state initialiser into a generator of initial states
modify the verification routine to take into account that more than one initial state might exist, and that the property must hold for each initial state
Having said this, it is likely not worth the hassle, unless this is done by patching Spin's source code.
Workaround.
If you want to state that something is true in the initial state, or starting from the initial state, and take some non-deterministic behaviour into account, then you should write something like this:
bit p
bool initialised = false
init {
if
:: p = 0
:: p = 1
fi;
initialised = true;   // TARGET STATE
initialised = false
}
ltl { initialised & ! p }
with which you get:
~$ ./a.out -a
pan:1: assertion violated !( !((initialised& !(p)))) (at depth 0)
pan: wrote 2.pml.trail
(Spin Version 6.4.3 -- 16 December 2014)
Warning: Search not completed
+ Partial Order Reduction
Full statespace search for:
never claim + (ltl_0)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 28 byte, depth reached 0, errors: 1
1 states, stored
0 states, matched
1 transitions (= stored+matched)
0 atomic steps
hash conflicts: 0 (resolved)
Stats on memory usage (in Megabytes):
0.000 equivalent memory usage for states (stored*(State-vector + overhead))
0.291 actual memory usage for states
128.000 memory used for hash table (-w24)
0.534 memory used for DFS stack (-m10000)
128.730 total actual memory usage
pan: elapsed time 0 seconds
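As a side note, a variant of this workaround (my own sketch, not part of the model above) phrases the property so that it constrains only the target state:
bit p
bool initialised = false
init {
if
:: p = 0
:: p = 1
fi;
initialised = true
}
// "p is 0 whenever the non-deterministic choice has been made";
// falsified by the p = 1 branch, verified if that branch is removed
ltl { [] (initialised -> !p) }
Spin then falsifies the claim precisely because p may be initialised to 1, which is the behaviour one would expect from a genuinely uninitialised variable.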
Init Semantics.
init is simply guaranteed to be the first process to be spawned, and is meant for spawning other processes when, for example, those processes take some parameters as input, e.g. when some resources are shared.
I believe that this fragment of documentation is a bit misleading:
The init process is most commonly used to initialize global variables, and to instantiate other processes, through the use of the run operator, before system execution starts. Any process, not just the init process, can do so, though
Since the atomic { } statement makes it possible to guarantee that the init process executes all of its code before any other process runs, one could say that, from the programming point of view, init can be used to initialise variables before other processes use them. But that is just a rough approximation: the init process does not correspond to a unique state in the execution model, but rather to a tree of states rooted at the initial state, and the root itself is given only by the global environment as it is before any process starts.
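A minimal sketch of that typical use of init (the process and channel names are mine):
chan c = [0] of { int }

proctype producer(chan out) { out ! 42 }
proctype consumer(chan inp) { int v; inp ? v; printf("got %d\n", v) }

init {
atomic {
/* both processes exist before either of them runs */
run producer(c);
run consumer(c)
}
}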

Related

Can you use the built in derivative functions in compute shaders? (vulkan)

I want to use the built-in derivative functions:
vec3 dpdx = dFdx(p);
vec3 dpdy = dFdy(p);
inside a compute shader. However, I get the following error:
Message ID name: UNASSIGNED-CoreValidation-Shader-InconsistentSpirv
Message: Validation Error: [ UNASSIGNED-CoreValidation-Shader-InconsistentSpirv ] Object 0: handle = 0x5654380d4dd8, name = Logical device: GeForce GT 1030, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x6bbb14 | SPIR-V module not valid: OpEntryPoint Entry Point <id> '5[%main]'s callgraph contains function <id> 46[%BiplanarMapping_s21_vf3_vf3_f1_], which cannot be used with the current execution modes:
Derivative instructions require DerivativeGroupQuadsNV or DerivativeGroupLinearNV execution mode for GLCompute execution model: DPdx
Derivative instructions require DerivativeGroupQuadsNV or DerivativeGroupLinearNV execution mode for GLCompute execution model: DPdy
%BiplanarMapping_s21_vf3_vf3_f1_ = OpFunction %v4float None %41
Severity: VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT
I don't seem to find anything on the topic when I search online.
Derivative functions only work in a fragment shader. The derivatives are based on the rate-of-change of the value across the primitive being rendered. Obviously compute shaders don't render primitives, so there is nothing to compute.
Apparently, NVIDIA has an extension that provides some derivative computation capabilities for compute shaders. That's where the weird error comes from.
Derivatives in fragment shaders are computed by taking the difference of the same value between adjacent invocations. As such, you can emulate this by using shared variables.
First, you have to make sure that the spatially adjacent invocations are in the same work group. So your work group size needs to be some multiple of 2x2 invocations. Then, you need a shared variable array, which you index by invocations within a work group. Each invocation should write its own value to its own index.
To compute the derivative, write the values to the shared variables, then issue a barrier (memoryBarrierShared() followed by barrier()). Take the difference between one invocation's value and the adjacent one's within the same 2x2 quad. You should make sure that all invocations in the same quad get the same value, by always subtracting the lower index from the higher index within the quad. Something like this:
uvec2 quadBase = (gl_LocalInvocationID.xy / 2u) * 2u; // lower-left invocation of this 2x2 quad
/*type*/ derFdX = variable[quadBase.x + 1][quadBase.y + 0] - variable[quadBase.x + 0][quadBase.y + 0];
/*type*/ derFdY = variable[quadBase.x + 0][quadBase.y + 1] - variable[quadBase.x + 0][quadBase.y + 0];
The NVIDIA extension basically does this for you, though it's probably more efficient since it wouldn't need the shared variable.
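Fleshing that out, a minimal compute-shader sketch (an 8x8 work group; value and computeValue are illustrative names, not from the question) might look like:
#version 450
layout(local_size_x = 8, local_size_y = 8) in;

shared float value[8][8];

// Placeholder for whatever per-invocation quantity you need derivatives of.
float computeValue(uvec2 id) { return float(id.x * id.y); }

void main() {
    uvec2 id = gl_LocalInvocationID.xy;
    value[id.x][id.y] = computeValue(id);

    // Make all writes visible across the work group before reading.
    memoryBarrierShared();
    barrier();

    // Lower-left invocation of this invocation's 2x2 quad.
    uvec2 quadBase = (id / 2u) * 2u;
    float dx = value[quadBase.x + 1u][quadBase.y] - value[quadBase.x][quadBase.y];
    float dy = value[quadBase.x][quadBase.y + 1u] - value[quadBase.x][quadBase.y];
    // ... use dx and dy as the emulated dFdx/dFdy ...
}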

TLA+: Temporal properties not being checked

I've got this toy example, and for some reason none of the temporal properties is ever checked. Even ridiculous ones like [](h = 123456) don't make TLC fail. What am I not understanding?
intro.tla
----------------------------------------------------- MODULE intro -----------------------------------------------------
EXTENDS Naturals
VARIABLE h
Init == h \in 1..12
Invariants == h \in 1..12
Next == h' = (h%12) + 1
Spec ==
  /\ Init
  /\ [][Next]_h
  \* None of these cause the model checker to fail
  /\ (\A i \in 1..15 : []<>(h = i))
  /\ []<>(h = 123456)
  /\ [](h = 123456)
  /\ <>(h = 123456)
  /\ [](FALSE)
THEOREM Spec => []Invariants
=======================================================================================================================
intro.cfg
SPECIFICATION Spec
INVARIANTS Invariants
tlc intro
TLC2 Version 2.13 of 18 July 2018 (rev: bfdbe00)
Running breadth-first search Model-Checking with seed -1431825986697619670 with 8 workers on 8 cores with 7131MB heap and 64MB offheap memory (Linux 5.0.0-arch1-1-ARCH amd64, Oracle Corporation 1.8.0_202 x86_64).
Parsing file /home/golly/projects/private/talks-wip/tla/intro.tla
Parsing file /tmp/Naturals.tla
Semantic processing of module Naturals
Semantic processing of module intro
Starting... (2019-03-11 12:20:09)
Computing initial states...
Computed 2 initial states...
Computed 4 initial states...
Computed 8 initial states...
Finished computing initial states: 12 distinct states generated.
Model checking completed. No error has been found.
Estimates of the probability that TLC did not check all reachable states
because two distinct states had the same fingerprint:
calculated (optimistic): val = 7.8E-18
based on the actual fingerprints: val = 1.6E-18
24 states generated, 12 distinct states found, 0 states left on queue.
The depth of the complete state graph search is 0.
The average outdegree of the complete state graph is 0 (minimum is 0, the maximum 0 and the 95th percentile is 0).
Finished in 00s at (2019-03-11 12:20:09)
Behavior specs consist of an initial-state formula (Init) and a next-state formula ([][Next]_h). I believe what's happening here is that the Toolbox or TLC sees those two conjuncts and ignores the rest. As it arguably should: those additional clauses don't make any behavior violate your properties; they simply say that there are fewer behaviors (initial states and actions) than you thought. If you want them checked as properties of your spec, move those clauses out of Spec and add them to Properties in the Toolbox (or a PROPERTIES entry in the .cfg file).
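A sketch of that fix (my own example, using 1..12 rather than the question's 1..15, since h never exceeds 12; note that with [][Next]_h alone, stuttering makes most liveness properties fail, so a fairness conjunct such as WF_h(Next) is also needed):
\* intro.tla: keep only the behavior part in Spec, state liveness separately
Spec == Init /\ [][Next]_h /\ WF_h(Next)
Liveness == \A i \in 1..12 : []<>(h = i)
\* intro.cfg
SPECIFICATION Spec
INVARIANTS Invariants
PROPERTIES Liveness
With this, TLC actually checks Liveness (which holds, since Next cycles h through 1..12 forever), and a property like [](h = 123456) would now fail as expected.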

cmm call format for foreign primop (integer-gmp example)

I have been checking out the integer-gmp source code to understand how foreign primops can be implemented in terms of Cmm, as documented on the GHC Primops page. I am aware of techniques to implement them using the llvm hack or fvia-C/gcc - this is more of a learning experience for me to understand the third approach that the integer-gmp library uses.
So, I looked up the CMM tutorial on the MSFT page (pdf link), went through the GHC CMM page, and still there are some unanswered questions (it's hard to keep all those concepts in one's head without digging into CMM, which is what I am doing now). There is this code fragment from the integer-gmp cmm file:
integer_cmm_int2Integerzh (W_ val)
{
W_ s, p; /* to avoid aliasing */
ALLOC_PRIM_N (SIZEOF_StgArrWords + WDS(1), integer_cmm_int2Integerzh, val);
p = Hp - SIZEOF_StgArrWords;
SET_HDR(p, stg_ARR_WORDS_info, CCCS);
StgArrWords_bytes(p) = SIZEOF_W;
/* mpz_set_si is inlined here, makes things simpler */
if (%lt(val,0)) {
s = -1;
Hp(0) = -val;
} else {
if (%gt(val,0)) {
s = 1;
Hp(0) = val;
} else {
s = 0;
}
}
/* returns (# size :: Int#,
data :: ByteArray#
#)
*/
return (s,p);
}
As defined in ghc cmm header:
W_ is alias for word.
ALLOC_PRIM_N is a function for allocating memory on the heap for a primitive object.
Sp(n) and Hp(n) are defined as below (comments are mine):
#define WDS(n) ((n)*SIZEOF_W) //WDS(n) calculates n*sizeof(Word)
#define Sp(n) W_[Sp + WDS(n)]//Sp(n) points to Stackpointer + n word offset?
#define Hp(n) W_[Hp + WDS(n)]//Hp(n) points to Heap pointer + n word offset?
I don't understand lines 5-9 (line 1 is the start, in case you have 1/0 confusion). More specifically:
Why is the function call format of ALLOC_PRIM_N (bytes,fun,arg) that way?
Why is p manipulated that way?
The function, as I understand it (from looking at the function signature in Prim.hs), takes an int and returns an (int, byte array) pair (stored in s and p respectively in the code).
For anyone who is wondering about the inlined call in the if block, it is a Cmm implementation of the GMP mpz_set_si function. My guess is that if you call a function defined in an object file through ccall, it can't be inlined (which makes sense, since it is object code, not intermediate code - the LLVM approach seems more suitable for inlining, through LLVM IR). So, the optimisation was to define a Cmm version of the function to be inlined. Please correct me if this guess is wrong.
An explanation of lines 5-9 will be very much appreciated. I have more questions about other macros defined in the integer-gmp file, but it might be too much to ask in one post. If you can answer the question with a Haskell wiki page or a blog post (you can post the link as an answer), that would be much appreciated (and if you do, I would also appreciate a step-by-step walk-through of an integer-gmp cmm macro such as GMP_TAKE2_RET1).
Those lines allocate a new ByteArray# on the Haskell heap, so to understand them you first need to know a bit about how GHC's heap is managed.
Each capability (= OS thread that executes Haskell code) has its own dedicated nursery, an area of the heap into which it makes normal, small allocations like this one. Objects are simply allocated sequentially into this area from low addresses to high addresses until the capability tries to make an allocation which exceeds the remaining space in the nursery, which triggers the garbage collector.
All heap objects are aligned to a multiple of the word size, i.e., 4 bytes on 32-bit systems and 8 bytes on 64-bit systems.
The Cmm-level register Hp points to (the beginning of) the last word which has been allocated in the nursery. HpLim points to the last word which can be allocated in the nursery. (HpLim can also be set to 0 by another thread to stop the world for GC, or to send an asynchronous exception.)
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/Storage/HeapObjects has information on the layout of individual heap objects. Notably each heap object begins with an info pointer, which (among other things) identifies what sort of heap object it is.
The Haskell type ByteArray# is implemented with the heap object type ARR_WORDS. An ARR_WORDS object just consists of (an info pointer followed by) a size (in bytes) followed by arbitrary data (the payload). The payload is not interpreted by the GC, so it can't store pointers to Haskell heap objects, but it can store anything else. SIZEOF_StgArrWords is the size of the header common to all ARR_WORDS heap objects, and in this case the payload is just a single word, so SIZEOF_StgArrWords + WDS(1) is the amount of space we need to allocate.
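Schematically, the layout just described looks like this (a simplified sketch in C; StgHeader and StgWord are taken from the GHC RTS headers, where the authoritative definition lives):
/* Sketch of an ARR_WORDS heap object (the representation of ByteArray#). */
typedef struct {
    StgHeader header;    /* info pointer, plus a cost-centre field when profiling */
    StgWord   bytes;     /* payload size in bytes; written by StgArrWords_bytes(p) */
    StgWord   payload[]; /* arbitrary data, not scanned by the GC */
} StgArrWords;
SIZEOF_StgArrWords then covers the header and the bytes field, and WDS(1) adds the one-word payload.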
ALLOC_PRIM_N (SIZEOF_StgArrWords + WDS(1), integer_cmm_int2Integerzh, val) expands to something like
Hp = Hp + (SIZEOF_StgArrWords + WDS(1));
if (Hp > HpLim) {
HpAlloc = SIZEOF_StgArrWords + WDS(1);
goto stg_gc_prim_n(integer_cmm_int2Integerzh, val);
}
First line increases Hp by the amount to be allocated. Second line checks for heap overflow. Third line records the amount that we tried to allocate, so the GC can undo it. The fourth line calls the GC.
The fourth line is the most interesting. The arguments tell the GC how to restart the thread once garbage collection is done: it should reinvoke integer_cmm_int2Integerzh with argument val. The "_n" in stg_gc_prim_n (and the "_N" in ALLOC_PRIM_N) means that val is a non-pointer argument (in this case an Int#). If val were a pointer to a Haskell heap object, the GC needs to know that it is live (so it doesn't get collected) and to reinvoke our function with the new address of the object. In that case we'd use the _p variant. There are also variants like _pp for multiple pointer arguments, _d for Double# arguments, etc.
After line 5, we've successfully allocated a block of SIZEOF_StgArrWords + WDS(1) bytes and, remember, Hp points to its last word. So, p = Hp - SIZEOF_StgArrWords sets p to the beginning of this block. Line 8 fills in the info pointer of p, identifying the newly-created heap object as ARR_WORDS. CCCS is the current cost-center stack, used only for profiling. When profiling is enabled each heap object contains an extra field that basically identifies who is responsible for its allocation. In non-profiling builds, there is no CCCS and SET_HDR just sets the info pointer. Finally, line 9 fills in the size field of the ByteArray#. The rest of the function fills in the payload and returns the sign value and the ByteArray# object pointer.
So, this ended up being more about the GHC heap than about the Cmm language, but I hope it helps.
Required knowledge
In order to do arithmetic and logical operations, computers have a digital circuit called an ALU (Arithmetic Logic Unit) in their CPU (Central Processing Unit). An ALU loads data from input registers. A processor register is a small amount of fast storage implemented in SRAM (Static Random-Access Memory) on the CPU chip; registers are even faster than the L1 cache, which serves data requests within about 3 CPU clock ticks. A processor often contains several kinds of registers, usually differentiated by the number of bits they can hold.
Numbers are expressed in a fixed number of bits and can therefore hold only a finite number of values. Typically the following primitive number sizes are exposed by the programming language (here, Haskell):
8 bit numbers = 256 unique representable values
16 bit numbers = 65 536 unique representable values
32 bit numbers = 4 294 967 296 unique representable values
64 bit numbers = 18 446 744 073 709 551 616 unique representable values
Fixed-precision arithmetic for those types is implemented in hardware. Word size refers to the number of bits that a computer's CPU can process in one go: for the x86 architecture this is 32 bits, for x64 it is 64 bits.
IEEE 754 defines the floating-point standard for {16, 32, 64, 128} bit numbers. For example, a 32 bit floating-point number (with 4 294 967 296 unique bit patterns) can hold approximate values in [-3.402823e38, 3.402823e38] with an accuracy of at least 7 significant decimal digits.
In addition
The acronym GMP means GNU Multiple Precision Arithmetic Library; it adds support for software-emulated arbitrary-precision arithmetic. The Glasgow Haskell Compiler's Integer implementation uses it.
GMP aims to be faster than any other bignum library for all operand sizes. Some important factors in doing this are:
Using full words as the basic arithmetic type.
Using different algorithms for different operand sizes; algorithms that are faster for very big numbers are usually slower for small numbers.
Highly optimized assembly language code for the most important inner loops, specialized for different processors.
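A quick Haskell illustration of the difference between fixed-precision Int and GMP-backed Integer (a minimal example; output shown for a 64-bit machine):
main :: IO ()
main = do
  print (maxBound :: Int)        -- 9223372036854775807
  print ((maxBound :: Int) + 1)  -- wraps around to -9223372036854775808
  print (2 ^ 64 :: Integer)      -- exact: 18446744073709551616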
Answer
For some, Haskell's syntax may be slightly hard to comprehend, so here is a JavaScript version:
var integer_cmm_int2Integerzh = function(word) {
return WORDSIZE == 32
? goog.math.Integer.fromInt(word)
: goog.math.Integer.fromBits([word.getLowBits(), word.getHighBits()]);
};
Where goog is the Google Closure Library namespace; the class used is goog.math.Integer. The called functions:
goog.math.Integer.fromInt = function(value) {
if (-128 <= value && value < 128) {
var cachedObj = goog.math.Integer.IntCache_[value];
if (cachedObj) {
return cachedObj;
}
}
var obj = new goog.math.Integer([value | 0], value < 0 ? -1 : 0);
if (-128 <= value && value < 128) {
goog.math.Integer.IntCache_[value] = obj;
}
return obj;
};
goog.math.Integer.fromBits = function(bits) {
var high = bits[bits.length - 1];
return new goog.math.Integer(bits, high & (1 << 31) ? -1 : 0);
};
That is not totally correct, as the return type should be return (s,p); where
s is the sign/size value
p is the pointer to the ByteArray# data
In order to fix this GMP wrapper should be created. This has been done in Haskell to JavaScript compiler project (source link).
Lines 5-9
ALLOC_PRIM_N (SIZEOF_StgArrWords + WDS(1), integer_cmm_int2Integerzh, val);
p = Hp - SIZEOF_StgArrWords;
SET_HDR(p, stg_ARR_WORDS_info, CCCS);
StgArrWords_bytes(p) = SIZEOF_W;
do the following:
allocate space for the new heap object
create a pointer to it
set the object's header (info pointer)
set the object's size in bytes

Meaning of following syntax of cuda Kernel

What is the meaning of the following syntax:
Kernel_fun<<<256, 128, 2056>>>(arg1, arg2, arg3);
Which value indicates the number of workgroups (blocks) and which value indicates the number of threads?
From the CUDA Programming Guide, appendix B.22 (as of May 2019):
The execution configuration is specified by inserting an expression of the form <<< Dg, Db, Ns, S >>> between the function name and the parenthesized argument list, where:
Dg is of type dim3 (see Section B.3.2) and specifies the dimension and size of the grid, such that Dg.x * Dg.y * Dg.z equals the number of blocks being launched; Dg.z must be equal to 1 for devices of compute capability 1.x;
Db is of type dim3 (see Section B.3.2) and specifies the dimension and size of each block, such that Db.x * Db.y * Db.z equals the number of threads per block;
Ns is of type size_t and specifies the number of bytes in shared memory that is dynamically allocated per block for this call in addition to the statically allocated memory; this dynamically allocated memory is used by any of the variables declared as an external array as mentioned in Section B.2.3; Ns is an optional argument which defaults to 0;
S is of type cudaStream_t and specifies the associated stream; S is an optional argument which defaults to 0.
In short:
<<< number of blocks, number of threads, dynamic memory per block, associated stream >>>
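For instance, applying this to the launch in the question (a sketch; the kernel body and arguments are hypothetical, only the launch configuration is taken from the question):
#include <cstdio>

// Hypothetical kernel matching the launch configuration from the question.
__global__ void Kernel_fun(const float *in, float *out, int n) {
    extern __shared__ float scratch[];   // backed by the 2056 dynamic bytes
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        scratch[threadIdx.x] = in[i];    // stage through shared memory
        out[i] = 2.0f * scratch[threadIdx.x];
    }
}

int main() {
    const int n = 256 * 128;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; i++) in[i] = float(i);

    // 256 blocks ("workgroups"), 128 threads per block,
    // 2056 bytes of dynamic shared memory per block, default stream.
    Kernel_fun<<<256, 128, 2056>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("%f\n", out[42]);  // 84.0
    cudaFree(in);
    cudaFree(out);
}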

Do atomic operations work the same across processes as they do across threads?

Obviously, atomic operations make sure that different threads don't clobber a value. But is this still true across processes, when using shared memory? Even if the processes happen to be scheduled by the OS to run on different cores? Or across different distinct CPUs?
Edit: Also, if it's not safe, is it not safe even on an operating system like Linux, where processes and threads are the same from the scheduler's point of view?
tl;dr: Read the fine print in the documentation of the atomic operations. Some will be atomic by design but may trip over certain variable types. In general, though, an atomic operation will maintain its contract between different processes just as it does between threads.
An atomic operation really only ensures that you won't have an inconsistent state if called by two entities simultaneously. For example, an atomic increment that is called by two different threads or processes on the same integer will always behave like so:
x = initial value (zero for the sake of this discussion)
Entity A increments x and returns the result to itself: result = x = 1.
Entity B increments x and returns the result to itself: result = x = 2.
where A and B indicate the first and second thread or process that makes the call.
A non-atomic operation can result in inconsistent or generally crazy results due to race conditions, incomplete writes to the address space, etc. For example, you can easily see this:
x = initial value = zero again.
Entity A calls x = x + 1. To evaluate x + 1, A checks the value of x (zero) and adds 1.
Entity B calls x = x + 1. To evaluate x + 1, B checks the value of x (still zero) and adds 1.
Entity B (by luck) finishes first and assigns the result of x + 1 = 1 (step 3) to x. x is now 1.
Entity A finishes second and assigns the result of x + 1 = 1 (step 2) to x. x is now 1.
Note the race condition as entity B races past A and completes the expression first.
Now imagine if x were a 64-bit double that is not ensured to have atomic assignments. In that case you could easily see something like this:
A 64 bit double x = 0.
Entity A tries to assign 0x1122334455667788 to x. The first 32 bits are assigned first, leaving x with 0x1122334400000000.
Entity B races in and assigns 0xffeeddccbbaa9988 to x. By chance, both 32 bit halves are updated and x is now = 0xffeeddccbbaa9988.
Entity A completes its assignment with the second half and x is now = 0xffeeddcc55667788.
These non-atomic assignments are some of the most hideous concurrent bugs you'll ever have to diagnose.
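For instance, on POSIX systems a C11 atomic keeps working across fork()ed processes through shared memory, provided the atomic type is lock-free (a locking fallback would not work across processes, so check ATOMIC_INT_LOCK_FREE if in doubt). A minimal sketch, with error handling elided:
#include <stdatomic.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Anonymous shared mapping: visible to both parent and child. */
    atomic_int *counter = mmap(NULL, sizeof *counter,
                               PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    atomic_init(counter, 0);

    pid_t pid = fork();                 /* two processes from here on */
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(counter, 1);   /* the atomic increment from the example above */

    if (pid != 0) {                     /* parent: wait for child, report */
        wait(NULL);
        printf("count = %d\n", atomic_load(counter));  /* always 200000 */
    }
    return 0;
}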
