Understanding an Error Trail from the Spin Model Checker - model-checking

I am trying to use the Spin Model Checker to model check a game between two objects (A and B). The objects move on a board, and each location is defined by its (x,y) coordinates. The two objects are supposed to not collide. I have three processes: init, A Model, and B Model. I am model checking an LTL property (meant to check that the two objects never occupy the same location):
ltl prop1 { [] (!(x_a == x_b) && !(y_a == y_b)) }
The error trail that I get is:
init -> A Model -> B Model -> init
However, I should not get an error trail (counterexample) based on the data that is shown: x_a=2, x_b=1, y_a=1, y_b=1.
Also, the first init in the trail goes through all the lines of the init process, but the second one only points to its last line.
My A Model and B Model consist only of guards and actions in a 'do' block, as shown below. However, they are actually more complex and have if blocks on the right of '->':
active proctype AModel(){
do
:: someValue == 1 -> go North
:: someValue == 2 -> go South
:: someValue == 3 -> go East
:: someValue == 4 -> go West
:: else -> skip;
od
}
Do I need to put anything in an atomic block? The reason I am asking is that the line shown in the error trail does not even go into the 'do' block; it is just the first line of the two models.
EDIT:
The LTL property was wrong. I changed that to:
ltl prop1 { [] (!((x_a == x_b) && (y_a == y_b))) }
However, I am still getting the exact same error trail.

Your LTL property is implemented incorrectly. Essentially, the counterexample that SPIN found is a true counterexample for the LTL as stated.
[] ( !(x_a == x_b) && !(y_a == y_b) ) =>
[] ( !(2 == 1) && !(1 == 1) ) =>
[] ( !0 && !1) =>
[] ( 1 && 0) =>
[] 0 =>
false
The LTL should be:
always not (same location) =>
[] (! ((x_a == x_b) && (y_a == y_b))) =>
[] (! ((2 == 1) && (1 == 1))) =>
[] (! (0 && 1)) =>
[] (! 0) =>
[] 1 =>
true
Regarding your init and tasks: when starting your tasks, you want to be sure that initialization is complete before the tasks run. I'd use one of two approaches:
init { ... ; atomic { run taskA(); run taskB() } }, where the tasks are spawned once all initialization is complete,
or
bool init_complete = false;
init { ...; init_complete = true }
proctype taskA () { /* init local stuff */ ...; init_complete -> /* begin real work */ ... }
Your LTL may be failing during the initialization.
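For example, a minimal sketch (assuming the init_complete flag from the second approach above and the question's x_a/x_b/y_a/y_b globals; the initial placement is made up) that keeps the property from being evaluated before initialization has finished:
bool init_complete = false;
byte x_a, y_a, x_b, y_b;

/* the collision check only has to hold once initialization is done */
ltl prop1 { [] (init_complete -> !((x_a == x_b) && (y_a == y_b))) }

init {
    x_a = 2; y_a = 1;    /* hypothetical starting positions */
    x_b = 1; y_b = 2;
    init_complete = true
}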
And based on your problem, if you ever change x or y you'd better change both at once in an atomic{}.
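As a rough sketch of that last point (reusing the globals from the previous sketch; the movement logic is invented and ignores collision avoidance), a move should update both coordinates inside one indivisible step so the property can never observe a half-completed move:
active proctype AModel() {
    byte new_x, new_y;
    (init_complete == true);     /* block until init is done */
    do
    :: true ->
        select(new_x : 0 .. 3);  /* pick some destination, purely illustrative */
        select(new_y : 0 .. 3);
        atomic {                 /* x and y change together, never one at a time */
            x_a = new_x;
            y_a = new_y
        }
    od
}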

Related

Alloy Analyzer element comparison from set

Some background: my project is to make a compiler that compiles from a C-like language to Alloy. The input language, which has C-like syntax, must support contracts. For now, I am trying to implement if statements that support pre- and post-condition statements, similar to the following:
int x=2
if_preCondition(x>0)
if(x == 2){
x = x + 1
}
if_postCondition(x>0)
The problem is that I am a bit confused with the results of Alloy.
sig Number{
arg1: Int,
}
fun addOneConditional (x : Int) : Number{
{ v : Number |
v.arg1 = ((x = 2 ) => x.add[1] else x)
}
}
assert conditionalSome {
all n: Number| (n.arg1 = 2 ) => (some field: addOneConditional[n.arg1] | { field.arg1 = n.arg1.add[1] })
}
assert conditionalAll {
all n: Number| (n.arg1 = 2 ) => (all field: addOneConditional[n.arg1] | { field.arg1 = n.arg1.add[1] })
}
check conditionalSome
check conditionalAll
In the above example, conditionalAll does not generate any counterexample. However, conditionalSome generates counterexamples. If I understand the all and some quantifiers correctly, then there is a mistake, because from mathematical logic we have ∀x expr(x) => ∃x expr(x) (i.e. if expression expr(x) is true for all values of x, then there exists an x for which expr(x) is true).
The first thing is that you need to model your pre-conditions, post-conditions, and operations. Functions are terrible for that because they cannot return something that indicates failure. You therefore need to model the operation as a predicate. The value of a predicate indicates whether the pre/post conditions are satisfied; the arguments and return values can be modeled as parameters.
This is as far as I can understand your operation:
pred add[ x : Int, x' : Int ] {
x > 0 -- pre condition
x = 2 =>
x'=x.plus[1]
else
x'=x
x' > 0 -- post condition
}
Alloy has no writable variables (Electrum has) so you need to model the before and after state with a prime (') character.
We can now use this predicate to calculate the set of solutions to your problem:
fun solutions : set Int {
{ x' : Int | some x : Int | add[ x,x' ] }
}
We create a set with integers for which we have a result. The prime character is nothing special in Alloy, it is only a convention for the post-state. I am abusing it slightly here.
This is more than enough Alloy source to make mistakes so let's test this.
run { some solutions }
If you run this then you'll see in the Txt view:
skolem $solutions={1, 3, 4, 5, 6, 7}
This is as expected. The add operation only works for positive numbers. Check. If the input is 2, the result is 3. Ergo, 2 can never be a solution. Check.
I admit, I am slightly confused by what you're doing in your asserts. I've tried to replicate them faithfully, although I've removed unnecessary things, or at least things I think are unnecessary. First, your some case: your code was doing an all but then selecting on 2, so I removed the outer quantification and hardcoded 2.
check somen {
some x' : solutions | 2.plus[1] = x'
}
This indeed does not give us any counterexample. Since solutions was {1, 3, 4, 5, 6, 7}, 2+1=3 is in the set, i.e. the some condition is satisfied.
check alln {
all x' : solutions | 2.plus[1] = x'
}
However, not all solutions have 3 as the answer. If you check this, you get the following counterexample:
skolem $alln_x'={7}
skolem $solutions={1, 3, 4, 5, 6, 7}
Conclusion: Daniel Jackson advises not to learn Alloy with Ints. Looking at your Number class, you took him literally: you still base your problem on Ints. What he meant was to not use Int at all, not to hide it under the carpet in a field. I understand where Daniel is coming from, but Ints are very attractive since we're so familiar with them. However, if you use Ints, at least let them appear in their full glory and don't hide them.
Hope this helps.
And the whole model:
pred add[ x : Int, x' : Int ] {
x > 0 -- pre condition
x = 2 =>
x'=x.plus[1]
else
x'=x
x' > 0 -- post condition
}
fun solutions : set Int { { x' : Int | some x : Int | add[ x,x' ] } }
run { some solutions }
check somen { some x' : solutions | x' = 3 }
check alln { all x' : solutions | x' = 3 }

how to model a queue in promela?

Ok, so I'm trying to model a CLH-RW lock in Promela.
The way the lock works is simple, really:
The queue consists of a tail, to which both readers and writers enqueue a node containing a single bool succ_must_wait. They do so by creating a new node and CAS-ing it with the tail.
The tail thereby becomes the node's predecessor, pred.
Then they spin-wait on pred.succ_must_wait until it is false.
Readers first increment a reader counter ncritR and then set their own flag to false, allowing multiple readers in the critical section at the same time. Releasing a read lock simply means decrementing ncritR again.
Writers wait until ncritR reaches zero, then enter the critical section. They do not set their flag to false until the lock is released.
I'm kind of struggling to model this in promela, though.
My current attempt (see below) tries to make use of arrays, where each node basically consists of a number of array entries.
This fails because let's say A enqueues itself, then B enqueues itself. Then the queue will look like this:
S <- A <- B
Where S is a sentinel node.
The problem now is that when A runs to completion and re-enqueues, the queue will look like
S <- A <- B <- A'
In actual execution, this is absolutely fine because A and A' are distinct node objects. And since A.succ_must_wait will have been set to false when A first released the lock, B will eventually make progress, and therefore A' will eventually make progress.
What happens in the array-based promela model below, though, is that A and A' occupy the same array positions, causing B to miss the fact that A has released the lock, thereby creating a deadlock where B is (wrongly) waiting for A' instead of A and A' is waiting (correctly) for B.
A possible "solution" to this could be to have A wait until B acknowledges the release. But that would not be true to how the lock works.
Another "solution" would be to wait for a CHANGE in pred.succ_must_wait, where a release would increment succ_must_wait, rather than reset it to 0.
But I'm intending to model a version of the lock, where pred may change (i.e. where a node may be allowed to disregard some of its predecessors), and I'm not entirely convinced something like the increasing version wouldn't cause an issue with this change.
So what's the "smartest" way to model an implicit queue like this in promela?
/* CLH-RW Lock */
/*pid: 0 = init, 1-2 = reader, 3-4 = writer*/
ltl liveness{
([]<> reader[1]#progress_reader)
&& ([]<> reader[2]#progress_reader)
&& ([]<> writer[3]#progress_writer)
&& ([]<> writer[4]#progress_writer)
}
bool initialised = 0;
byte ncritR;
byte ncritW;
byte tail;
bool succ_must_wait[5];
byte pred[5];
init{
assert(_pid == 0);
ncritR = 0;
ncritW = 0;
/*sentinel node*/
tail =0;
pred[0] = 0;
succ_must_wait[0] = 0;
initialised = 1;
}
active [2] proctype reader()
{
assert(_pid >= 1);
(initialised == 1)
do
:: else ->
succ_must_wait[_pid] = 1;
atomic {
pred[_pid] = tail;
tail = _pid;
}
(succ_must_wait[pred[_pid]] == 0)
ncritR++;
succ_must_wait[_pid] = 0;
atomic {
/*freeing previous node for garbage collection*/
pred[_pid] = 0;
}
/*CRITICAL SECTION*/
progress_reader:
assert(ncritR >= 1);
assert(ncritW == 0);
ncritR--;
atomic {
/*necessary to model the fact that the next access creates a new queue node*/
if
:: tail == _pid -> tail = 0;
:: else -> skip;
fi
}
od
}
active [2] proctype writer()
{
assert(_pid >= 1);
(initialised == 1)
do
:: else ->
succ_must_wait[_pid] = 1;
atomic {
pred[_pid] = tail;
tail = _pid;
}
(succ_must_wait[pred[_pid]] == 0)
(ncritR == 0)
atomic {
/*freeing previous node for garbage collection*/
pred[_pid] = 0;
}
ncritW++;
/* CRITICAL SECTION */
progress_writer:
assert(ncritR == 0);
assert(ncritW == 1);
ncritW--;
succ_must_wait[_pid] = 0;
atomic {
/*necessary to model the fact that the next access creates a new queue node*/
if
:: tail == _pid -> tail = 0;
:: else -> skip;
fi
}
od
}
First of all, a few notes:
You don't need to initialize your variables to 0, since:
The default initial value of all variables is zero.
see the docs.
You don't need to enclose a single instruction inside an atomic {} statement, since any elementary statement is executed atomically. For better efficiency of the verification process, whenever possible, you should use d_step {} instead. Here you can find a related stackoverflow Q/A on the topic.
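A minimal sketch of that distinction (the variable names are invented): a single assignment needs no wrapper at all, a deterministic non-blocking group fits in d_step {}, and atomic {} is the choice when a grouped statement may block or when the group is non-deterministic:
byte head, count;

active proctype grouping_example() {
    count = 1;                  /* an elementary statement is already indivisible */
    d_step {                    /* deterministic and never blocks: cheapest for the verifier */
        head = (head + 1) % 5;
        count = count - 1
    }
    atomic {                    /* atomic {} allows statements that may block inside */
        (count == 0);           /* condition statement acting as a guard */
        head = 0
    }
}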
init {} is guaranteed to have _pid == 0 when one of the two following conditions holds:
no active proctype is declared
init {} is declared before any other active proctype appearing in the source code
Active processes, including init {}, are spawned in order of appearance inside the source code. All other processes are spawned in order of appearance of the corresponding run ... statement.
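For instance, a tiny sketch (process names invented) of how the _pid values are handed out:
/* declared before init, so it is spawned first and gets _pid 0 */
active proctype first() {
    printf("first:  _pid = %d\n", _pid)
}
/* spawned second, so it gets _pid 1: an assert(_pid == 0) here would fail */
init {
    printf("init:   _pid = %d\n", _pid)
}
/* spawned third: _pid 2 */
active proctype second() {
    printf("second: _pid = %d\n", _pid)
}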
I identified the following issues in your model:
the instruction pred[_pid] = 0 is useless because that memory location is only read after the assignment pred[_pid] = tail
When you release the successor of a node, you only set succ_must_wait[_pid] to 0 and you don't invalidate the node instance on which your successor is waiting. This is the problem that you identified in your question but were unable to solve. The solution I propose is to add the following code:
pid j;
for (j: 1..4) {
if
:: pred[j] == _pid -> pred[j] = 0;
:: else -> skip;
fi
}
This should be enclosed in an atomic {} block.
You correctly set tail back to 0 when you find that the node that has just left the critical section is also the last node in the queue, and you correctly enclose this operation in an atomic {} block. However, it may happen that, when you are about to enter this atomic {} block, some other process that was still waiting in some idle state decides to execute the initial atomic block and copies the current value of tail (which corresponds to the node that has just expired) into its own pred[_pid] memory location. If the node that has just exited the critical section now attempts to join it once again, setting its own value of succ_must_wait[_pid] to 1, you get another instance of circular wait among processes. The correct approach is to merge this part with the code releasing the successor.
The following inline function can be used to release the successor of a given node:
inline release_succ(i)
{
d_step {
pid j;
for (j: 1..4) {
if
:: pred[j] == i ->
pred[j] = 0;
:: else ->
skip;
fi
}
succ_must_wait[i] = 0;
if
:: tail == _pid -> tail = 0;
:: else -> skip;
fi
}
}
The complete model follows:
byte ncritR;
byte ncritW;
byte tail;
bool succ_must_wait[5];
byte pred[5];
init
{
skip
}
inline release_succ(i)
{
d_step {
pid j;
for (j: 1..4) {
if
:: pred[j] == i ->
pred[j] = 0;
:: else ->
skip;
fi
}
succ_must_wait[i] = 0;
if
:: tail == _pid -> tail = 0;
:: else -> skip;
fi
}
}
active [2] proctype reader()
{
loop:
succ_must_wait[_pid] = 1;
d_step {
pred[_pid] = tail;
tail = _pid;
}
trying:
(succ_must_wait[pred[_pid]] == 0)
ncritR++;
release_succ(_pid);
// critical section
progress_reader:
assert(ncritR > 0);
assert(ncritW == 0);
ncritR--;
goto loop;
}
active [2] proctype writer()
{
loop:
succ_must_wait[_pid] = 1;
d_step {
pred[_pid] = tail;
tail = _pid;
}
trying:
(succ_must_wait[pred[_pid]] == 0) && (ncritR == 0)
ncritW++;
// critical section
progress_writer:
assert(ncritR == 0);
assert(ncritW == 1);
ncritW--;
release_succ(_pid);
goto loop;
}
I added the following properties to the model:
p0: the writer with _pid equal to 4 goes through its progress state infinitely often, provided that it is given the chance to execute some instruction infinitely often:
ltl p0 {
([]<> (_last == 4)) ->
([]<> writer[4]#progress_writer)
};
This property should be true.
p1: there is never more than one reader in the critical section:
ltl p1 {
([] (ncritR <= 1))
};
Obviously, we expect this property to be false in a model that matches your specification.
p2: there is never more than one writer in the critical section:
ltl p2 {
([] (ncritW <= 1))
};
This property should be true.
p3: there isn't any node that is the predecessor of two other nodes at the same time, unless such node is node 0:
ltl p3 {
[] (
(((pred[1] != 0) && (pred[2] != 0)) -> (pred[1] != pred[2])) &&
(((pred[1] != 0) && (pred[3] != 0)) -> (pred[1] != pred[3])) &&
(((pred[1] != 0) && (pred[4] != 0)) -> (pred[1] != pred[4])) &&
(((pred[2] != 0) && (pred[3] != 0)) -> (pred[2] != pred[3])) &&
(((pred[2] != 0) && (pred[4] != 0)) -> (pred[2] != pred[4])) &&
(((pred[3] != 0) && (pred[4] != 0)) -> (pred[3] != pred[4]))
)
};
This property should be true.
p4: it is always true that whenever writer with _pid equal to 4 tries to access the critical section then it will eventually get there:
ltl p4 {
[] (writer[4]#trying -> <> writer[4]#progress_writer)
};
This property should be true.
The outcome of the verification matches our expectations:
~$ spin -search -ltl p0 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p0)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 3305, errors: 0
...
~$ spin -search -ltl p1 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p1)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 1692, errors: 1
...
~$ spin -search -ltl p2 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p2)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 3115, errors: 0
...
~$ spin -search -ltl p3 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p3)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 3115, errors: 0
...
~$ spin -search -ltl p4 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p4)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 3115, errors: 0
...

LTL properties and promela program

I have the following program that models a FIFO with a process in PROMELA:
mtype = { PUSH, POP, IS_EMPTY, IS_FULL };
#define PRODUCER_UID 0
#define CONSUMER_UID 1
proctype fifo(chan inputs, outputs)
{
mtype command;
int data, tmp, src_uid;
bool data_valid = false;
do
:: true ->
inputs?command(tmp, src_uid);
if
:: command == PUSH ->
if
:: data_valid ->
outputs!IS_FULL(true, src_uid);
:: else ->
data = tmp
data_valid = true;
outputs!PUSH(data, src_uid);
fi
:: command == POP ->
if
:: !data_valid ->
outputs!IS_EMPTY(true, src_uid);
:: else ->
outputs!POP(data, src_uid);
data = -1;
data_valid = false;
fi
:: command == IS_EMPTY ->
outputs!IS_EMPTY(!data_valid, src_uid);
:: command == IS_FULL ->
outputs!IS_FULL(data_valid, src_uid);
fi;
od;
}
proctype producer(chan inputs, outputs)
{
mtype command;
int v;
do
:: true ->
atomic {
inputs!IS_FULL(false, PRODUCER_UID) ->
outputs?IS_FULL(v, PRODUCER_UID);
}
if
:: v == 1 ->
skip
:: else ->
select(v: 0..16);
printf("P[%d] - produced: %d\n", _pid, v);
access_fifo:
atomic {
inputs!PUSH(v, PRODUCER_UID);
outputs?command(v, PRODUCER_UID);
}
assert(command == PUSH);
fi;
od;
}
proctype consumer(chan inputs, outputs)
{
mtype command;
int v;
do
:: true ->
atomic {
inputs!IS_EMPTY(false, CONSUMER_UID) ->
outputs?IS_EMPTY(v, CONSUMER_UID);
}
if
:: v == 1 ->
skip
:: else ->
access_fifo:
atomic {
inputs!POP(v, CONSUMER_UID);
outputs?command(v, CONSUMER_UID);
}
assert(command == POP);
printf("P[%d] - consumed: %d\n", _pid, v);
fi;
od;
}
init {
chan inputs = [0] of { mtype, int, int };
chan outputs = [0] of { mtype, int, int };
run fifo(inputs, outputs); // pid: 1
run producer(inputs, outputs); // pid: 2
run consumer(inputs, outputs); // pid: 3
}
I want to add wr_ptr and rd_ptr to the program to indicate the write and read pointers relative to the depth of the FIFO when a PUSH update is performed:
wr_ptr = wr_ptr % depth;
empty=0;
if
:: (rd_ptr == wr_ptr) -> full=true;
fi
and similar changes on POP updates.
Could you please help me add this to the program?
Or should I make it an LTL property and use that to check it?
From the comments: I want to verify this property, for example: if the FIFO is full, one should not have a write request. Is this the right syntax? full means that the FIFO is full and wr_idx is the write pointer. I do not know how to access full, empty, wr_idx, rd_idx, depth of the fifo process in the properties: ltl fifo_no_write_when_full {[] (full -> ! wr_idx)}
Here is an example of the process-based FIFO with size 1 that I gave you here adapted for an arbitrary size, which can be configured with FIFO_SIZE. For verification purposes, I would keep this value as small as possible (e.g. 3), because otherwise you are just widening the state space without including any more significant behaviour.
mtype = { PUSH, POP, IS_EMPTY, IS_FULL };
#define PRODUCER_UID 0
#define CONSUMER_UID 1
#define FIFO_SIZE 10
proctype fifo(chan inputs, outputs)
{
mtype command;
int tmp, src_uid;
int data[FIFO_SIZE];
byte head = 0;
byte count = 0;
bool res;
do
:: true ->
inputs?command(tmp, src_uid);
if
:: command == PUSH ->
if
:: count >= FIFO_SIZE ->
outputs!IS_FULL(true, src_uid);
:: else ->
data[(head + count) % FIFO_SIZE] = tmp;
count = count + 1;
outputs!PUSH(data[(head + count - 1) % FIFO_SIZE], src_uid);
fi
:: command == POP ->
if
:: count <= 0 ->
outputs!IS_EMPTY(true, src_uid);
:: else ->
outputs!POP(data[head], src_uid);
atomic {
head = (head + 1) % FIFO_SIZE;
count = count - 1;
}
fi
:: command == IS_EMPTY ->
res = count <= 0;
outputs!IS_EMPTY(res, src_uid);
:: command == IS_FULL ->
res = count >= FIFO_SIZE;
outputs!IS_FULL(res, src_uid);
fi;
od;
}
No change to producer, consumer or init was necessary:
proctype producer(chan inputs, outputs)
{
mtype command;
int v;
do
:: true ->
atomic {
inputs!IS_FULL(false, PRODUCER_UID) ->
outputs?IS_FULL(v, PRODUCER_UID);
}
if
:: v == 1 ->
skip
:: else ->
select(v: 0..16);
printf("P[%d] - produced: %d\n", _pid, v);
access_fifo:
atomic {
inputs!PUSH(v, PRODUCER_UID);
outputs?command(v, PRODUCER_UID);
}
assert(command == PUSH);
fi;
od;
}
proctype consumer(chan inputs, outputs)
{
mtype command;
int v;
do
:: true ->
atomic {
inputs!IS_EMPTY(false, CONSUMER_UID) ->
outputs?IS_EMPTY(v, CONSUMER_UID);
}
if
:: v == 1 ->
skip
:: else ->
access_fifo:
atomic {
inputs!POP(v, CONSUMER_UID);
outputs?command(v, CONSUMER_UID);
}
assert(command == POP);
printf("P[%d] - consumed: %d\n", _pid, v);
fi;
od;
}
init {
chan inputs = [0] of { mtype, int, int };
chan outputs = [0] of { mtype, int, int };
run fifo(inputs, outputs); // pid: 1
run producer(inputs, outputs); // pid: 2
run consumer(inputs, outputs); // pid: 3
}
Now you should have enough material to work on and be ready to write your own properties. On this regard, in your question you write:
I do not know how to access the full, empty, wr_idx, rd_idx, depth on the fifo process in the properties ltl fifo_no_write_when_full {[] (full -> ! wr_idx)}
First of all, please note that in my code rd_idx corresponds to head, depth (should) correspond to count, and that I did not use an explicit wr_idx because the latter can be derived from the former two: it is given by (head + count) % FIFO_SIZE. This is not just a matter of code cleanliness: having fewer variables in a Promela model actually helps with the memory consumption and running time of the verification process.
Of course, if you really want to have wr_idx in your model you are free to add it yourself. (:
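If you do, a minimal sketch (reusing head, count and FIFO_SIZE from the model above) is to derive it with a macro rather than storing another variable, so it can never get out of sync:
/* textual macro: usable wherever head and count are in scope
   (move them to the global space if you also want wr_idx in an ltl property) */
#define wr_idx ((head + count) % FIFO_SIZE)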
Second, if you look at the Promela manual for ltl properties, you find that:
The names or symbols must be defined to represent boolean expressions on global variables from the model.
So, in other words, it's not possible to put local variables inside an ltl expression. If you want to use them, then you should take them out of the process's local space and put them in the global space.
So, to check fifo_no_write_when_full* you could:
move the declaration of count out in the global space
add a label fifo_write: here:
:: command == PUSH ->
if
:: count >= FIFO_SIZE ->
outputs!IS_FULL(true, src_uid);
:: else ->
fifo_write:
data[(head + count) % FIFO_SIZE] = tmp;
count = count + 1;
outputs!PUSH(data[(head + count - 1) % FIFO_SIZE], src_uid);
fi
check the property:
ltl fifo_no_write_when_full { [] ( (count >= FIFO_SIZE) -> ! fifo#fifo_write) }
Third, before any attempt to verify any of your properties with the usual commands, e.g.
~$ spin -a fifo.pml
~$ gcc -o fifo pan.c
~$ ./fifo -a -N fifo_no_write_when_full
you should modify producer and consumer so that neither of them executes for an indefinite amount of time, and thereby keep the search space at a small depth. Otherwise, you are likely to get an error of the sort
error: max search depth too small
and have the verification exhaust all of your hardware resources without reaching any sensible conclusion.
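One rough way to do that (the MAX_ITER bound and the iter counter are invented, not part of the original model) is to let each process give up after a fixed number of loop iterations; the producer is sketched below, and the consumer is analogous:
#define MAX_ITER 4

proctype producer(chan inputs, outputs)
{
    mtype command;
    int v;
    byte iter = 0;                 /* added: bounds the number of productions */
    do
    :: iter >= MAX_ITER -> break   /* stop instead of looping forever */
    :: else ->
        iter++;
        atomic {
            inputs!IS_FULL(false, PRODUCER_UID) ->
            outputs?IS_FULL(v, PRODUCER_UID);
        }
        if
        :: v == 1 -> skip
        :: else ->
            select(v: 0..16);
            atomic {
                inputs!PUSH(v, PRODUCER_UID);
                outputs?command(v, PRODUCER_UID);
            }
            assert(command == PUSH)
        fi
    od
}
When a never claim is active, Spin disables invalid-end-state checking anyway; for plain safety runs without a claim you may additionally want an end label in fifo at its inputs?command(...) receive, since fifo will idle there once both peers have stopped.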
*: actually the name fifo_no_write_when_full is quite generic and might have multiple interpretations, e.g.
the fifo does not perform a push when it is full
the producer is not able to push if the fifo is full
In the example I provided I chose to adopt the first interpretation of the property.

Groovy iterating list variables

I am working on a script to collect field names (dialogPartyASelection_* && dialogPartyBSelection_*) and then compare the two values to check if they match. The full list (selections) of fields is being broken down into 'groups' before comparison. I can break out certain parts (i.e., comparing the two field values via iteration) and test successfully; however, when bringing everything together, the script doesn't seem to compare correctly. I may be approaching this/setting myself up to do this wrong (I have started to toy with creating a map, with party A as the key and party B as the value).
Snippet of code:
// Test Variables
PartyBSelection_Propertieshamster = 'Accepted'
PartyBSelection_Propertieszembra = 'Agreed'
PartyBSelection_Propertiesdogs = 'Agreed'
PartyBSelection_Propertiescats = 'Decision taken'
PartyASelection_Propertieshamster = 'Accepted'
PartyASelection_Propertieszembra = 'Agreed'
PartyASelection_Propertiesdogs = 'Agreed'
PartyASelection_Propertiescats = 'Decision taken'
// example of selections(there are lots of entries for A/B party) = ['dialogPartyBSelection_Communication','dialogPartyASelection_Housing','dialogPartyASelection_Income','PartyASelection_Properties']
def selectedGroup = { s -> selections.findAll { it.contains s}} // for pulling groups of questions from list
def isAgreed = { a, b -> (a in ['Agreed', 'Decision taken','Accepted'] && b in ['Agreed', 'Decision taken','Accepted']) } // for comparing values
for(questions in selectedGroup("Properties")){
{k -> percentCompleteProperties += isAgreed("PartyASelection_${k}", "PartyBSelection_${k}")? 1 : 0}
println questions
println percentCompleteProperties
}
Current output:
PartyBSelection_Propertiescats
0
PartyBSelection_Propertieshamster
0
PartyBSelection_Propertiesdogs
0
PartyBSelection_Propertieshamster
0
PartyASelection_Propertiescats
0
PartyASelection_Propertieshamster
0
PartyASelection_Propertiesdogs
0
PartyASelection_Propertieshamster
0
This is sample code, but I changed the test data, etc. Please check whether this logic is correct for your case.
// Test Variables
def testVariables = [
'PartyBSelection_PropertiesAssets':'Accepted',
'PartyBSelection_PropertiesDebts': 'Agreed',
'PartyBSelection_PropertiesMoney': 'Agreed',
'PartyBSelection_PropertiesSpecialGoods':'Decision taken',
'PartyASelection_PropertiesAssets':'Accepted',
'PartyASelection_PropertiesDebts':'Agreed',
'PartyASelection_PropertiesMoney':'Agreed',
'PartyASelection_PropertiesSpecialGoods':'Decision taken'
]
List<String> selections= [
'PropertiesAssets',
'PropertiesDebts',
'PropertiesMoney',
'PropertiesSpecialGoods',
'PropertiesAssets',
'PropertiesDebts',
'PropertiesMoney',
'PropertiesSpecialGoods'
]
def selectedGroup = { s -> selections.findAll { it.contains s}}
List<String> okLabels = ['Agreed', 'Decision taken','Accepted']
def isAgreed = {a, b ->
(testVariables[a] in okLabels && testVariables[b] in okLabels)
}
List<Map<String, Integer>> result = selectedGroup("Properties").collect {String question ->
[(question) : isAgreed("PartyASelection_${question}", "PartyBSelection_${question}")? 1 : 0 ]
}
assert result == [
['PropertiesAssets':1],
['PropertiesDebts':1],
['PropertiesMoney':1],
['PropertiesSpecialGoods':1],
['PropertiesAssets':1],
['PropertiesDebts':1],
['PropertiesMoney':1],
['PropertiesSpecialGoods':1]
]
Another way:
def r = [:]
def questionsCount = selectedGroup("Properties").size()
selectedGroup("Properties").eachWithIndex {String question, Integer i ->
println "question:${question}(${i+1}/${questionsCount})"
r.put(question,isAgreed("PartyASelection_${question}", "PartyBSelection_${question}")? 1 : 0 )
}
// This version collects all records into one map.
// Also, a record that is duplicated (as key) is overwritten.
r == [PropertiesAssets:1, PropertiesDebts:1, PropertiesMoney:1, PropertiesSpecialGoods:1]
Then the output is:
question:PropertiesAssets(1/8)
question:PropertiesDebts(2/8)
question:PropertiesMoney(3/8)
question:PropertiesSpecialGoods(4/8)
question:PropertiesAssets(5/8)
question:PropertiesDebts(6/8)
question:PropertiesMoney(7/8)
question:PropertiesSpecialGoods(8/8)

Groovier way of manipulating the list

I have two lists like this:
def a = [100,200,300]
def b = [30,60,90]
I want a Groovier way of manipulating a like this:
1) The first element of a should be changed to a[0]-2*b[0]
2) The second element of a should be changed to a[1]-4*b[1]
3) The third element of a should be changed to a[2]-8*b[2]
(provided that both a and b are of the same length, 3)
If the list is changed to a map like this, let's say:
def a1 = [100:30, 200:60, 300:90]
how could one do the same operation in this case?
Thanks in advance.
For List, I'd go with:
def result = []
a.eachWithIndex{ item, index ->
result << item - ((2**index) * b[index])
}
For the Map it's a bit easier, but it still requires external state:
int i = 1
def result = a1.collect { k, v -> k - ((2**i++) * v) }
It's a pity Groovy doesn't have an analog of zip; in this case, something like zipWithIndex or collectWithIndex.
Using collect
In response to Victor in the comments, you can do this using a collect
def a = [100,200,300]
def b = [30,60,90]
// Introduce a list `c` of the multiplier
def c = (1..a.size()).collect { 2**it }
// Transpose these lists together, and calculate
[a,b,c].transpose().collect { x, y, z ->
x - y * z
}
Using inject
You can also use inject, passing in a map of multiplier and result, then fetching the result out at the end:
def result = [a,b].transpose().inject( [ mult:2, result:[] ] ) { acc, vals ->
acc.result << vals.with { av, bv -> av - ( acc.mult * bv ) }
acc.mult *= 2
acc
}.result
And similarly, you can use inject for the map:
def result = a1.inject( [ mult:2, result:[] ] ) { acc, key, val ->
acc.result << key - ( acc.mult * val )
acc.mult *= 2
acc
}.result
Using inject has the advantage that you don't need to declare external variables, but has the disadvantage that the code is harder to read (and, as Victor points out in the comments, this makes static analysis of the code hard to impossible for IDEs and groovypp).
def a1 = [100:30, 200:60, 300:90]
a1.eachWithIndex { item, index ->
println item.key - ((2**(index+1)) * item.value)
}
