What does "Error: sending to an uninitialized chan" mean in iSpin? - model-checking

iSpin is generating this message in the progress window (the middle-bottom panel of the Simulate tab):
Error: sending to an uninitialized chan
The weird thing is that the error message only starts to appear in the middle of the simulation (I set the maximum step number to 10000 and it starts to appear at around 6000 steps).
How can this be? Does Spin somehow lose the channel initialization in the middle of the simulation?
This is the initialization of one of the channels I use:
chan VP = [1] of {byte};
and this is the error message during the simulation:

This is an MCVE for the error you are experiencing:
chan c;

init {
    c!10;
}
which yields
~$ spin test.pml
Error: sending to an uninitialized chan
timeout
Error: sending to an uninitialized chan
#processes: 1
0: proc 0 (:init::1) test.pml:4 (state 1)
1 process created
It is possible that you forgot to state whether the channel is synchronous or asynchronous, and what kind of messages it should contain. A proper channel declaration should look like this:
chan c = [N] of { type_1, ..., type_M };
where N is larger than or equal to 1 for an asynchronous channel and 0 for a synchronous (rendezvous) channel, and type_1, ..., type_M is the list of types (e.g. int, bool) of the fields contained in one message.
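For instance, a minimal fix of the MCVE above (a sketch, assuming a buffered channel holding a single byte field is what was intended):

chan c = [1] of { byte };

init {
    c!10;
}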
For more details, read the documentation.

Related

How can I make the two receiving processes not run twice in a row in a Promela model?

I am a beginner in Spin. I am trying to make the model run the two receiving processes (the proctype called Consumer in the model) alternately, i.e. (consumer 1, consumer 2, consumer 1, consumer 2, ...). But when I run this code, the output of the 2 consumer processes appears in random order. Can someone help me?
This is the code I am struggling with.
mtype = {P, C};
mtype turn = P;
chan ch1 = [1] of {bit};
byte current_consumer = 1;
byte previous_consumer;

active [2] proctype Producer()
{
    bit a = 0;
    do
    :: atomic {
           turn == P ->
           ch1 ! a;
           printf("The producer %d --> sent %d!\n", _pid, a);
           a = 1 - a;
           turn = C;
       }
    od
}

active [2] proctype Consumer()
{
    bit b;
    do
    :: atomic {
           turn == C ->
           current_consumer = _pid;
           ch1 ? b;
           printf("The consumer %d --> received %d!\n\n", _pid, b);
           assert(current_consumer == _pid);
           turn = P;
       }
    od
}
Sample output is shown in the attached photo.
First of all, let me draw your attention to this excerpt of atomic's documentation:
If any statement within the atomic sequence blocks, atomicity is lost, and other processes are then allowed to start executing statements. When the blocked statement becomes executable again, the execution of the atomic sequence can be resumed at any time, but not necessarily immediately. Before the process can resume the atomic execution of the remainder of the sequence, the process must first compete with all other active processes in the system to regain control, that is, it must first be scheduled for execution.
In your model, this is currently not causing any problem because ch1 is a buffered channel (i.e. it has size >= 1). However, any small change in the model could break this invariant.
From the comments, I understand that your goal is to alternate consumers, but you don't really care which producer is sending the data.
To be honest, your model already contains two examples of how processes can alternate with one another:
The Producer/Consumers alternate one another via turn, by assigning a different value each time
The Producer/Consumers alternate one another also via ch1, since this has size 1
However, both approaches are alternating Producer/Consumers rather than Consumers themselves.
One approach I like is message filtering with eval (see docs): each Consumer knows its own id, waits for a token with its own id in a separate channel, and only when that is available it starts doing some work.
byte current_consumer;

chan prod2cons = [1] of { bit };
chan cons = [1] of { byte };

proctype Producer(byte id; byte total)
{
    bit a = 0;
    do
    :: true ->
        // atomic is only for printing purposes
        atomic {
            prod2cons ! a;
            printf("The producer %d --> sent %d\n", id, a);
        }
        a = 1 - a;
    od
}

proctype Consumer(byte id; byte total)
{
    bit b;
    do
    :: cons?eval(id) ->
        current_consumer = id;
        atomic {
            prod2cons ? b;
            printf("The consumer %d --> received %d\n\n", id, b);
        }
        assert(current_consumer == id);
        // yield turn to the next Consumer
        cons ! ((id + 1) % total)
    od
}

init {
    run Producer(0, 2);
    run Producer(1, 2);
    run Consumer(0, 2);
    run Consumer(1, 2);
    // First consumer is 0
    cons!0;
}
This model, briefly:
Producers/Consumers alternate via prod2cons, a channel of size 1. This enforces the following behavior: after some producer creates a message, some consumer must consume it.
Consumers alternate via cons, a channel of size 1 containing a token value indicating which consumer is currently allowed to perform some work. All consumers peek at the contents of cons, but only the one with a matching id is allowed to consume the token and move on. At the end of its turn, the consumer creates a new token with the next id in the chain. Consumers alternate in a round-robin fashion.
The output is:
The producer 0 --> sent 0
The consumer 1 --> received 0
The producer 1 --> sent 1
The consumer 0 --> received 1
The producer 1 --> sent 0
The consumer 1 --> received 0
...
The producer 0 --> sent 0
The consumer 1 --> received 0
The producer 0 --> sent 1
The consumer 0 --> received 1
The producer 0 --> sent 0
The consumer 1 --> received 0
The producer 0 --> sent 1
The consumer 0 --> received 1
Notice that producers do not necessarily alternate with one another, whereas consumers do -- as requested.

SPIN program using channels - verification gives "missing pars in receive" error though simulation works fine

I have a program that uses channels for inter-process messaging. It is driving me nuts.
When I run my program by typing:
spin ipc_verify.pml
It works fine (shown by the prints in my program) and exits gracefully as designed.
However, when I try to verify by doing the following:
spin -a ipc-verify.pml
gcc -DVECTORSZ=4096 -DVERBOSE -o pan pan.c
./pan
It fails in the first statement in the server where the server is trying to read on the channel, with the error:
pan:1: missing pars in receive (at depth 20)
It seems like I am missing something very simple, but can't put my finger on it. I am new to Spin, doing it as part of my coursework, so please pardon if it is a simple, silly question.
Here is a brief description of the program:
The program starts 3 processes - 1 server and 2 clients. A client sends a number to the server, which responds with the square of the number. There is a request channel on which every client sends its request (the message carries the client id, so the server knows which client to respond to), and a response channel on which the server sends the responses to the clients. Clients use a random receive on the channel to find the message for their id.
The code line where I believe it fails is this
:: ch_clientrequest ? msgtype, client_id, client_request ->
I actually have a bigger program that exhibits this behavior, so I tried to reproduce it in this smaller program. I read through various ways of getting more data from Spin about this error, and also googled around. I also tried changing the message structure: more fields, fewer fields, a regular receive instead of a random receive, etc. Nothing seems to change this error!
Here is the full error trace from running ./pan:
pan:1: missing pars in receive (at depth 20)
pan: wrote ipc-verify.pml.trail
(Spin Version 6.5.1 -- 20 December 2019)
Warning: Search not completed
+ Partial Order Reduction
+ FullStack Matching
Full statespace search for:
never claim - (none specified)
assertion violations +
acceptance cycles - (not selected)
invalid end states +
State-vector 2104 byte, depth reached 20, errors: 1
21 states, stored
0 states, matched
0 matches within stack
21 transitions (= stored+matched)
0 atomic steps
hash conflicts: 0 (resolved)
stackframes: 0/0
stats: fa 0, fh 0, zh 0, zn 0 - check 0 holds 0
stack stats: puts 0, probes 0, zaps 0
Stats on memory usage (in Megabytes):
0.043 equivalent memory usage for states (stored*(State-vector + overhead))
1.164 actual memory usage for states
128.000 memory used for hash table (-w24)
0.534 memory used for DFS stack (-m10000)
129.315 total actual memory usage
I have tried to look up what this message means at verification time, but couldn't find much. Based on various code experiments, it seems that the verifier thinks the message I am trying to receive is supposed to have more parameters than I am reading. I tried to check whether it is reacting to the actual message received, and whether that message has fewer fields, but that doesn't seem to be the case.
I have been banging my head on this for a full day today, with no leads. Any pointers or ideas to solve this would be very appreciated.
I am running this on my Linux box, Spin 6.5.
/*
    One hub controller (server), 8 clients.
    Each client sends a message to the hub, hub responds with the message it received.
*/
#define N 2 // Number of clients
#define MQLENGTH 100

mtype = {START_CLIENT, COMPUTE_REQUEST, COMPUTE_RESPONSE, STOP_CLIENT, STOP_HUB}

typedef ClientRequest {
    byte num;
}

typedef HubResponse {
    bool isNull; // To indicate whether there is data or not. Set True for START and STOP messages
    int id;
    byte num;
    int sqnum;
}

typedef IdList {
    byte ids[N]; // Use to store the ids assigned to each client process
}

IdList idlist;

chan ch_clientrequest = [MQLENGTH] of {mtype, byte, ClientRequest} // Hub listens to this
chan ch_hubresponse = [MQLENGTH] of {mtype, byte, HubResponse} // Clients read from this

int message_served = 0

proctype Client(byte id) {
    // A client reads the message and responds to it
    mtype msgtype
    HubResponse hub_response
    ClientRequest client_request

    do
    :: ch_hubresponse ?? msgtype, eval(id), hub_response ->
        printf("\nClient Id: %d, Received - MsgType: %e", id, msgtype)
        if
        :: (msgtype == COMPUTE_RESPONSE) ->
            // print the message
            printf("\nClient Id: %d, Received - num = %d, sqnum = %d", id, hub_response.num, hub_response.sqnum)
            // send another message. new num = sqnum
            client_request.num = hub_response.sqnum % 256 // To keep it as byte
            if
            :: (client_request.num < 2) ->
                client_request.num = 2
            :: else ->
                skip
            fi
            ch_clientrequest ! COMPUTE_REQUEST(id, client_request)
            printf("\nClient Id: %d, Sent - num = %d", id, client_request.num)
        :: (msgtype == STOP_CLIENT) ->
            // break from the do loop
            break;
        :: (msgtype == START_CLIENT) ->
            client_request.num = id // Start with num = id
            ch_clientrequest ! COMPUTE_REQUEST(id, client_request)
            printf("\nClient Id: %d, Sent - num = %d", id, client_request.num)
        fi
    od
    printf("\nClient exiting. Id = %d", id)
}

proctype Hub() {
    // Hub sends a start message to each client, and then keeps responding to what it receives
    HubResponse hr
    ClientRequest client_request
    mtype msgtype
    byte client_id
    int i
    byte num

    for (i: 0 .. (N - 1)) {
        // Send a start message
        hr.isNull = true
        ch_hubresponse ! START_CLIENT(idlist.ids[i], hr) // Send a start message
    }

    // All of the clients have been started. Now wait for the message and respond appropriately
    do
    :: ch_clientrequest ? msgtype, client_id, client_request ->
        printf("\nHub Controller. Received - MsgType: %e", msgtype)
        if
        :: (msgtype == COMPUTE_REQUEST) ->
            // handle the message
            num = client_request.num
            hr.isNull = false
            hr.id = client_id
            hr.num = num
            hr.sqnum = num * num
            ch_hubresponse ! COMPUTE_RESPONSE(client_id, hr) // Send a response message
            message_served++
        :: (msgtype == STOP_HUB) ->
            // break from the do loop, send stop message to all clients, and exit
            break;
        fi
    od

    // loop through the ids and send stop message
    for (i: 0 .. (N - 1)) {
        // Send a stop message
        hr.isNull = true
        ch_hubresponse ! STOP_CLIENT(idlist.ids[i], hr) // Send a stop message
    }
    printf("\nServer exiting.")
}

active proctype Main() {
    // Start the clients and give them an id to use
    ClientRequest c
    pid n;
    n = _nr_pr;
    byte i

    for (i: 1 .. N) {
        run Client(i)
        idlist.ids[i-1] = i
    }

    // Start the hub and give it the list of ids
    run Hub()

    // Send a message to Hub to stop serving
    (message_served >= 100);
    ch_clientrequest ! STOP_HUB(0, c)

    // Wait for all processes to exit
    (n == _nr_pr);
    printf("\nAll processes have exited!")
}

How to safely interact with channels in goroutines in Golang

I am new to Go and I am trying to understand the way channels in goroutines work. To my understanding, the keyword range can be used to iterate over the values of a channel until the channel is closed or the buffer runs out; hence, a for range c loop will iterate repeatedly until the buffer runs out.
I have the following simple function that sends values to a channel:
func main() {
    c := make(chan int)
    go printchannel(c)
    for i := 0; i < 10; i++ {
        c <- i
    }
}
I have two implementations of printchannel and I am not sure why the behaviour is different.
Implementation 1:
func printchannel(c chan int) {
    for range c {
        fmt.Println(<-c)
    }
}
output: 1 3 5 7
Implementation 2:
func printchannel(c chan int) {
    for i := range c {
        fmt.Println(i)
    }
}
output: 0 1 2 3 4 5 6 7 8
And I was expecting neither of those outputs!
Wanted output: 0 1 2 3 4 5 6 7 8 9
Shouldn't the main function and the printchannel function run on two threads in parallel, one adding values to the channel and the other reading the values up until the channel is closed? I might be missing some fundamental Go/threading concept here, and pointers to that would be helpful.
Feedback on this (and on my understanding of channel manipulation in goroutines) is greatly appreciated!
Implementation 1. You're reading from the channel twice - range c and <-c are both reading from the channel.
Implementation 2. That's the correct approach. The reason you might not see 9 printed is that two goroutines might run in parallel threads. In that case it might go like this:
main goroutine sends 9 to the channel and blocks until it's read
second goroutine receives 9 from the channel
main goroutine unblocks and exits. That terminates the whole program, which doesn't give the second goroutine a chance to print 9
In a case like that you have to synchronize your goroutines. For example, like so:
func printchannel(c chan int, wg *sync.WaitGroup) {
    for i := range c {
        fmt.Println(i)
    }
    wg.Done() // notify that we're done here
}

func main() {
    c := make(chan int)
    wg := sync.WaitGroup{}
    wg.Add(1) // increase by one to wait for one goroutine to finish
              // very important to do it here and not in the goroutine
              // otherwise you get a race condition
    go printchannel(c, &wg) // very important to pass wg by reference
                            // sync.WaitGroup is a structure, passing it
                            // by value would produce incorrect results
    for i := 0; i < 10; i++ {
        c <- i
    }
    close(c)  // close the channel to terminate the range loop
    wg.Wait() // wait for the goroutine to finish
}
As for goroutines vs. threads: you shouldn't confuse them, and you should probably understand the difference between them. Goroutines are green threads. There are countless blog posts, lectures and Stack Overflow answers on that topic.
In implementation 1, range reads from the channel once, and then Println reads from it again. Hence you're skipping over 2, 4, 6, 8.
In both implementations, once the final i (9) has been sent to the goroutine, the program exits. Thus the goroutine does not have time to print out 9. To solve it, use a WaitGroup as mentioned in the other answer, or a done channel to avoid the semaphore/mutex.
func main() {
    c := make(chan int)
    done := make(chan bool)
    go printchannel(c, done)
    for i := 0; i < 10; i++ {
        c <- i
    }
    close(c)
    <-done
}

func printchannel(c chan int, done chan bool) {
    for i := range c {
        fmt.Println(i)
    }
    done <- true
}
The reason your first implementation only returns every other number is that you are, in effect, "taking" from c twice each time the loop runs: first with range, then again with <-. It just happens that you're not actually binding or using the first value taken off the channel, so all you end up printing is every other one.
An alternative approach to your first implementation would be to not use range at all, e.g.:
func printchannel(c chan int) {
    for {
        fmt.Println(<-c)
    }
}
I could not replicate the behavior of your second implementation, on my machine, but the reason for that is that both of your implementations are racy - they will terminate whenever main ends, regardless of what data may be pending in a channel or however many goroutines may be active.
As a closing note, I'd warn you not to think about goroutines as explicitly being "threads", though they have a similar mental model and interface. In a simple program like this it's not at all unlikely that Go might just do it all using a single OS thread.
Your first loop does not work because you have 2 blocking channel receivers and they do not execute at the same time.
When you call the goroutine the loop starts, and it waits for the first value to be sent to the channel. Effectively think of it as <-c.
When the for loop in the main function runs, it sends 0 on the channel. At this point the range c receives the value and stops blocking the execution of the loop.
Then the loop is blocked by the receiver at fmt.Println(<-c). When 1 is sent on the second iteration of the loop in main, the receiver at fmt.Println(<-c) reads from the channel, allowing fmt.Println to execute, thus finishing the loop body and waiting again for a value at the for range c.
Your second implementation of the looping mechanism is the correct one.
The reason it exits before printing 9 is that after the for loop in main finishes, the program goes ahead and completes execution of main.
In Go, func main is itself run as a goroutine while executing. Thus when the for loop in main completes, main goes ahead and exits, and as the print is within a parallel goroutine that gets torn down, it is never executed. There is no time for it to print, as there is nothing to block main from completing and exiting the program.
One way to solve this is to use wait groups http://www.golangprograms.com/go-language/concurrency.html
In order to get the expected result you need to have a blocking construct running in main that provides enough time, or waits for confirmation that the goroutine has finished, before allowing the program to continue.

Undeclared variable error when using mtype with Jspin

I am new to Jspin and Promela. I tried to implement the following system:
A home alarm system can be activated and deactivated using a personal ID key or password. After activation, the system enters a waiting period of about 30 seconds, which allows users to evacuate the secured area, after which the alarm is armed. Also, when an intrusion is detected, the alarm has a built-in waiting period or delay of 15 seconds to allow the intruder to enter the password or swipe the card key, thus identifying himself; in case the identification is not made within the allocated 15 seconds, the alarm will go off and will stay on until an ID card or password is used to deactivate it.
This is the code:
mtype = {sigact, sigdeact};
chan signal = [0] of {mtype};
/* chan syntax for declaring and initializing message passing channels */
int count;
bool alarm_off = true; /* The initial state of the alarm is off */

active proctype alarm()
{
off:
    if
    :: count >= 30 -> atomic { signal!sigdeact; count = 0; alarm_off = false; goto on; }
    :: else -> atomic { count++; alarm_off = true; goto off; }
    fi;
on:
    if
    :: count >= 15 -> atomic { signal!sigact; count = 0; alarm_off = false; goto off; }
    :: else -> atomic { signal!sigact; alarm_off = true; goto off; }
    fi;
pending:
    if
    :: count >= 30 -> atomic { count = 0; alarm_off = false; goto on; }
    :: count < 30 -> atomic { count++; alarm_off = false; goto pending; }
    fi;
}
When I run the code with Jspin I get this message:
Error: undeclared variable: sigact
But I declared this in the header.
How can I solve this?
According to the documentation of Promela, you are using mtype correctly.
In fact, I cannot reproduce your error with spin version 6.4.3, so I suspect this is a specific issue of Jspin not being correctly updated.
Unless you want to use spin instead of Jspin, you can try the following work-around, which should work even with Jspin:
#define sigact 0
#define sigdeact 1
chan signal = [0] of {short}; // or bool for only 2 values
...
Since no one ever reads from signal, I assume the system model is incomplete and that more processes will be added later on.
Be aware that, in the following instruction sequence:
atomic { signal!sigdeact; count = 0; alarm_off = false; goto on; }
the atomicity will be temporarily lost by alarm because signal is a synchronous channel (it has size 0) and so another process has to be immediately scheduled for reading the message being sent.
In off state, when count >= 30 you reset count back to 0, set alarm_off = false and then go to state on. In on state, you immediately set alarm_off back to true. Is this intended? It looks like some mistake, perhaps you meant to go to state pending.
By reading the description of your system, it looks like the alarm is missing some kind of input signal. I suspect you are using the signal channel differently from its intended purpose.
Shouldn't the model have some transition from state pending to off, in case the proper personal ID/password is used?
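Tying these remarks together with a purely illustrative sketch (this process is hypothetical and is not part of the question's model or of the original answer): a companion process that actually reads from signal would both make the input side of the model explicit and give the rendezvous send inside atomic a partner to synchronize with:

/* hypothetical companion process, for illustration only */
active proctype monitor()
{
    mtype sig;
    do
    :: signal?sig ->
        printf("alarm signal received: %e\n", sig)
    od
}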

Why does the following code generate a deadlock?

Golang newbie here. Can somebody explain why the following code generates a deadlock?
I am aware of the approach of sending true on a boolean done channel, but I don't want to use it.
package main

import (
    "fmt"
    "sync"
    "time"
)

var wg2 sync.WaitGroup

func producer2(c chan<- int) {
    for i := 0; i < 5; i++ {
        time.Sleep(time.Second * 10)
        fmt.Println("Producer Writing to chan %d", i)
        c <- i
    }
}

func consumer2(c <-chan int) {
    defer wg2.Done()
    fmt.Println("Consumer Got value %d", <-c)
}

func main() {
    c := make(chan int)
    wg2.Add(5)
    fmt.Println("Starting .... 1")
    go producer2(c)
    go consumer2(c)
    fmt.Println("Starting .... 2")
    wg2.Wait()
}
Following is my understanding and I know that it is wrong:
The channel will be blocked the moment 0 is written to it within the loop of the producer function.
So I expect the channel to be emptied by the consumer afterwards.
As the channel is emptied in step 2, the producer function can put in another value, get blocked again, and step 2 repeats.
Your original deadlock is caused by wg2.Add(5): you were waiting for 5 goroutines to finish, but only one did, since you called wg2.Done() once. Change this to wg2.Add(1), and your program will run without error.
However, I suspect that you intended to consume all the values in the channel, not just one as you do now. If you change the consumer function to:
func consumer2(c <-chan int) {
    defer wg2.Done()
    for i := range c {
        fmt.Printf("Consumer Got value %d\n", i)
    }
}
you will get another deadlock, because the channel is never closed in the producer function, and the consumer keeps waiting for more values that never arrive. Adding close(c) to the producer function will fix it.
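Putting those two changes together, here is a minimal sketch of the whole program under this first fix (wg2.Add(1), a ranged consumer, and close(c) in the producer); the sleep is shortened purely to keep the demo quick:

package main

import (
    "fmt"
    "sync"
    "time"
)

var wg2 sync.WaitGroup

func producer2(c chan<- int) {
    for i := 0; i < 5; i++ {
        time.Sleep(time.Second) // shortened from 10s only for the demo
        fmt.Printf("Producer Writing to chan %d\n", i)
        c <- i
    }
    close(c) // no more values: lets the consumer's range loop terminate
}

func consumer2(c <-chan int) {
    defer wg2.Done()
    for i := range c {
        fmt.Printf("Consumer Got value %d\n", i)
    }
}

func main() {
    c := make(chan int)
    wg2.Add(1) // we only wait for the single consumer goroutine
    go producer2(c)
    go consumer2(c)
    wg2.Wait()
}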
Why does it error?
Running your code gets the following error:
➜ gochannel go run dl.go
Starting .... 1
Starting .... 2
Producer Writing to chan 0
Consumer Got value 0
Producer Writing to chan 1
fatal error: all goroutines are asleep - deadlock!
Here is why:
There are three goroutines in your code: main, producer2 and consumer2. When it runs,
producer2 sends the number 0 to the channel
consumer2 receives 0 from the channel, and exits
producer2 sends 1 to the channel, but no one is consuming, since consumer2 has already exited
producer2 is waiting
main executes wg2.Wait(), but the WaitGroup counter never reaches zero, so main is waiting
Two goroutines are waiting here, doing nothing, and nothing will happen no matter how long you wait. It is a deadlock! Go detects it and panics.
There are two concepts you are confused about here:
how waitgroup works
how to receive all values from a channel
I'll explain them briefly here; there are already many articles out there on the internet.
how waitgroup works
WaitGroup is a way to wait for all goroutines to finish. When running goroutines in the background, it's important to know when all of them have quit, so that some follow-up action can be taken.
In your case, we run two goroutines, so at the beginning we should call wg2.Add(2), and each goroutine should call wg2.Done() to notify that it is done.
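Here is a tiny, self-contained sketch of just the WaitGroup mechanics (independent of the producer/consumer code above):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    wg.Add(2) // we are about to start two goroutines
    for n := 1; n <= 2; n++ {
        go func(id int) {
            defer wg.Done() // each goroutine reports completion exactly once
            fmt.Println("goroutine", id, "done")
        }(n)
    }
    wg.Wait() // blocks until the counter drops back to zero
}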
Receive data from a channel
When receiving data from a channel: if you know exactly how many values it will send, use a for loop this way:
for i := 0; i < N; i++ {
    data = <-c
    process(data)
}
Otherwise use it this way:
for data := range c {
    process(data)
}
Also, don't forget to close the channel when there is no more data to send.
How to fix it?
With the above explanation, the code can be fixed as:
package main

import (
    "fmt"
    "sync"
    "time"
)

var wg2 sync.WaitGroup

func producer2(c chan<- int) {
    defer wg2.Done()
    for i := 0; i < 5; i++ {
        time.Sleep(time.Second * 1)
        fmt.Printf("Producer Writing to chan %d\n", i)
        c <- i
    }
    close(c)
}

func consumer2(c <-chan int) {
    defer wg2.Done()
    for i := range c {
        fmt.Printf("Consumer Got value %d\n", i)
    }
}

func main() {
    c := make(chan int)
    wg2.Add(2)
    fmt.Println("Starting .... 1")
    go producer2(c)
    go consumer2(c)
    fmt.Println("Starting .... 2")
    wg2.Wait()
}
Here is another possible way to fix it.
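That alternative itself isn't reproduced here; as an illustrative guess only (my assumption, not necessarily what was originally intended), one such alternative is to drop the WaitGroup and signal completion over a done channel instead:

package main

import (
    "fmt"
    "time"
)

func producer2(c chan<- int) {
    for i := 0; i < 5; i++ {
        time.Sleep(time.Second)
        fmt.Printf("Producer Writing to chan %d\n", i)
        c <- i
    }
    close(c) // no more values to send
}

func consumer2(c <-chan int, done chan<- bool) {
    for i := range c {
        fmt.Printf("Consumer Got value %d\n", i)
    }
    done <- true // tell main that the channel has been fully drained
}

func main() {
    c := make(chan int)
    done := make(chan bool)
    fmt.Println("Starting .... 1")
    go producer2(c)
    go consumer2(c, done)
    fmt.Println("Starting .... 2")
    <-done // block until the consumer reports completion
}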
The expected output
Fixed code gives the following output:
➜ gochannel go run dl.go
Starting .... 1
Starting .... 2
Producer Writing to chan 0
Consumer Got value 0
Producer Writing to chan 1
Consumer Got value 1
Producer Writing to chan 2
Consumer Got value 2
Producer Writing to chan 3
Consumer Got value 3
Producer Writing to chan 4
Consumer Got value 4
