OCaml Thread misunderstanding - multithreading

I only see the whole output after a 5-second delay, but I think it should be otherwise.
I expect the following output:
main is here
hi received
(and only then sleep for 5 sec)
but my code starts by sleeping for 5 seconds first and only then continues.
let t1 ch =
  let m = sync (receive ch) in
  print_string (m ^ " -> received\n");
  delay 5.0;
  sync (send ch "t1 got the message")

let main () =
  let ch = new_channel () in
  create t1 ch;
  print_string "main is here\n";
  sync (send ch "hi");
  print_string ("main confirms :" ^ sync (receive ch))
I would gladly read some tutorials online, but I couldn't find any.

Try flushing the output. OCaml's standard output is buffered, so text printed with print_string only shows up once the buffer is flushed, which otherwise happens when the program exits, i.e. after the 5-second delay:
print_string "main is here\n";
flush stdout
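
For reference, here is a minimal runnable sketch with explicit flushes, assuming the original opened Thread and Event from the threads library and ended with a call to main (those parts are not shown in the question):

open Thread  (* create, delay *)
open Event   (* new_channel, send, receive, sync *)

let t1 ch =
  let m = sync (receive ch) in
  print_string (m ^ " -> received\n");
  flush stdout;                       (* show the child's output right away *)
  delay 5.0;
  sync (send ch "t1 got the message")

let main () =
  let ch = new_channel () in
  ignore (create t1 ch);
  print_string "main is here\n";
  flush stdout;                       (* flush before blocking on the channel *)
  sync (send ch "hi");
  print_string ("main confirms: " ^ sync (receive ch));
  print_newline ()                    (* print_newline also flushes stdout *)

let () = main ()

Built with something like ocamlfind ocamlc -package threads.posix -linkpkg, this prints "main is here" and "hi -> received" immediately and only then pauses for the 5-second delay.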

MailboxProcessor Scan memory leak

Consider the following agent which uses Scan to process every message, with a message posted every millisecond.
open System.Threading

let mp = MailboxProcessor<string>.Start (fun inbox ->
    let rec await () = inbox.Scan (fun msg ->
        printfn "Received : %s" msg
        printfn "Queue length: %i" inbox.CurrentQueueLength
        Some <| await ())
    await ())

while true do
    mp.Post "word"
    Thread.Sleep 1
If I monitor the memory usage of the process (macOS, via Activity Monitor), it grows and grows. Yet you can see from the printout that the queue length remains at 0.
Is this a memory leak bug in Scan or am I doing something wrong?

SPIN program using channels - verification gives "missing pars in receive" error though simulation works fine

I have a program that uses channels for inter-process messaging, and it is driving me nuts.
When I run my program by typing:
spin ipc_verify.pml
It works fine (shown by the prints in my program) and exits gracefully as designed.
However, when I try to verify by doing the following:
spin -a ipc-verify.pml
gcc -DVECTORSZ=4096 -DVERBOSE -o pan pan.c
./pan
It fails in the first statement in the server where the server is trying to read on the channel, with the error:
pan:1: missing pars in receive (at depth 20)
It seems like I am missing something very simple, but I can't put my finger on it. I am new to Spin and am doing this as part of my coursework, so please pardon me if it is a simple, silly question.
Here is a brief description of the program:
The program starts 3 processes: 1 server and 2 clients. A client sends a number to the server, which responds with the square of that number. There is a request channel on which every client sends its request (the message carries the client id, which the server uses to know which client to respond to), and a response channel on which the server sends the responses. Clients use a random receive on that channel to pick out the message for their id.
The code line where I believe it fails is this
:: ch_clientrequest ? msgtype, client_id, client_request ->
I actually have a bigger program that exhibits this behavior, so I tried to reproduce it in this smaller program. I read through various ways of getting more diagnostic output from Spin about this error, and also googled around. I also tried changing the message structure: more fields, fewer fields, a regular receive instead of a random receive, etc. Nothing changes this error.
Here is the full error trace from running ./pan:
pan:1: missing pars in receive (at depth 20)
pan: wrote ipc-verify.pml.trail
(Spin Version 6.5.1 -- 20 December 2019)
Warning: Search not completed
+ Partial Order Reduction
+ FullStack Matching
Full statespace search for:
never claim - (none specified)
assertion violations +
acceptance cycles - (not selected)
invalid end states +
State-vector 2104 byte, depth reached 20, errors: 1
21 states, stored
0 states, matched
0 matches within stack
21 transitions (= stored+matched)
0 atomic steps
hash conflicts: 0 (resolved)
stackframes: 0/0
stats: fa 0, fh 0, zh 0, zn 0 - check 0 holds 0
stack stats: puts 0, probes 0, zaps 0
Stats on memory usage (in Megabytes):
0.043 equivalent memory usage for states (stored*(State-vector + overhead))
1.164 actual memory usage for states
128.000 memory used for hash table (-w24)
0.534 memory used for DFS stack (-m10000)
129.315 total actual memory usage
I have tried to find out what this message means when it appears at verification time, but couldn't find much. Based on various experiments with the code, it seems that the verifier thinks the message I am trying to receive is supposed to have more parameters than I am reading. I checked whether it is reacting to the actual message received, and whether that one has fewer fields, but that doesn't seem to be the case.
I have been banging my head on this for a full day, with no leads. Any pointers or ideas would be much appreciated.
I am running this on my Linux box, with Spin 6.5.
/*
  One hub controller (server), 8 clients.
  Each client sends a message to the hub, hub responds with the message it received.
*/
#define N 2 // Number of clients
#define MQLENGTH 100

mtype = {START_CLIENT, COMPUTE_REQUEST, COMPUTE_RESPONSE, STOP_CLIENT, STOP_HUB}

typedef ClientRequest {
    byte num;
}

typedef HubResponse {
    bool isNull; // To indicate whether there is data or not. Set True for START and STOP messages
    int id;
    byte num;
    int sqnum;
}

typedef IdList {
    byte ids[N]; // Use to store the ids assigned to each client process
}

IdList idlist;

chan ch_clientrequest = [MQLENGTH] of {mtype, byte, ClientRequest} // Hub listens to this
chan ch_hubresponse = [MQLENGTH] of {mtype, byte, HubResponse} // Clients read from this

int message_served = 0

proctype Client(byte id) {
    // A client reads the message and responds to it
    mtype msgtype
    HubResponse hub_response
    ClientRequest client_request
    do
    :: ch_hubresponse ?? msgtype, eval(id), hub_response ->
        printf("\nClient Id: %d, Received - MsgType: %e", id, msgtype)
        if
        :: (msgtype == COMPUTE_RESPONSE) ->
            // print the message
            printf("\nClient Id: %d, Received - num = %d, sqnum = %d", id, hub_response.num, hub_response.sqnum)
            // send another message. new num = sqnum
            client_request.num = hub_response.sqnum % 256 // To keep it as byte
            if
            :: (client_request.num < 2) ->
                client_request.num = 2
            :: else ->
                skip
            fi
            ch_clientrequest ! COMPUTE_REQUEST(id, client_request)
            printf("\nClient Id: %d, Sent - num = %d", id, client_request.num)
        :: (msgtype == STOP_CLIENT) ->
            // break from the do loop
            break;
        :: (msgtype == START_CLIENT) ->
            client_request.num = id // Start with num = id
            ch_clientrequest ! COMPUTE_REQUEST(id, client_request)
            printf("\nClient Id: %d, Sent - num = %d", id, client_request.num)
        fi
    od
    printf("\nClient exiting. Id = %d", id)
}

proctype Hub() {
    // Hub sends a start message to each client, and then keeps responding to what it receives
    HubResponse hr
    ClientRequest client_request
    mtype msgtype
    byte client_id
    int i
    byte num
    for (i : 0 .. (N - 1)) {
        // Send a start message
        hr.isNull = true
        ch_hubresponse ! START_CLIENT(idlist.ids[i], hr) // Send a start message
    }
    // All of the clients have been started. Now wait for the message and respond appropriately
    do
    :: ch_clientrequest ? msgtype, client_id, client_request ->
        printf("\nHub Controller. Received - MsgType: %e", msgtype)
        if
        :: (msgtype == COMPUTE_REQUEST) ->
            // handle the message
            num = client_request.num
            hr.isNull = false
            hr.id = client_id
            hr.num = num
            hr.sqnum = num * num
            ch_hubresponse ! COMPUTE_RESPONSE(client_id, hr) // Send a response message
            message_served++
        :: (msgtype == STOP_HUB) ->
            // break from the do loop, send stop message to all clients, and exit
            break;
        fi
    od
    // loop through the ids and send stop message
    for (i : 0 .. (N - 1)) {
        // Send a stop message
        hr.isNull = true
        ch_hubresponse ! STOP_CLIENT(idlist.ids[i], hr) // Send a stop message
    }
    printf("\nServer exiting.")
}

active proctype Main() {
    // Start the clients and give them an id to use
    ClientRequest c
    pid n;
    n = _nr_pr;
    byte i
    for (i : 1 .. N) {
        run Client(i)
        idlist.ids[i-1] = i
    }
    // Start the hub and give it the list of ids
    run Hub()
    // Send a message to Hub to stop serving
    (message_served >= 100);
    ch_clientrequest ! STOP_HUB(0, c)
    // Wait for all processes to exit
    (n == _nr_pr);
    printf("\nAll processes have exited!")
}

Simulating coroutines with POSIX threads (in guile scheme)

I grokked yesterday that a mutex with condition variables is similar to the idea of a coroutine, if the caller thread waits for the callee thread to signal its execution.
The idea is to have two threads cooperating, with a mutex representing the "execution lock".
I tried to verify the idea in my favorite Scheme. The implementation worked fine until I expanded the idea to 2 threads. The threads go slightly out of order once the iteration count gets up to around 8000.
I can't really see why the threads sometimes end up in the wrong order. If the program's algorithm were wrong, it shouldn't have worked at all, since with all the mutual waiting a deadlock would be expected. I'm really interested in an insight.
Here's the code so far:
(use-modules (ice-9 threads))

(define mtx1 (make-mutex))
(define mtx2 (make-mutex))
(define cv1 (make-condition-variable)) ;; cv1: B -> A
(define cv2 (make-condition-variable)) ;; cv2: B -> C
(define cv3 (make-condition-variable)) ;; cv3: A -> B
(define cv4 (make-condition-variable)) ;; cv4: C -> B
(define v 0)

(lock-mutex mtx1) ;; block t1
(lock-mutex mtx2) ;; block t2

(define (B->A)
  (signal-condition-variable cv1)     ;; signal B -> A is going to happen
  (wait-condition-variable cv3 mtx1)) ;; release mtx1 and wait for A -> B

(define (B->C)
  (signal-condition-variable cv2)     ;; signal B -> C is going to happen
  (wait-condition-variable cv4 mtx2)) ;; release mtx2 and wait for C -> B

(define (A->B)
  (signal-condition-variable cv3)     ;; signal A -> B is going to happen
  (wait-condition-variable cv1 mtx1)) ;; release mtx1 and wait for B -> A

(define (C->B)
  (signal-condition-variable cv4)     ;; signal C -> B is going to happen
  (wait-condition-variable cv2 mtx2)) ;; release mtx2 and wait for B -> C

(call-with-new-thread
 (lambda ()
   (lock-mutex mtx1) ;; wait for B release mtx1
   (let A ()
     (A->B)
     (set! v (+ v 1))
     (format #t "A: v=~a~%" v)
     (A))))

(call-with-new-thread
 (lambda ()
   (lock-mutex mtx2) ;; wait for B to release mtx2
   (let C ()
     (C->B)
     (set! v (+ v 1))
     (format #t "C: v=~a~%" v)
     (C))))

(wait-condition-variable cv3 mtx1) ;; trigger first execution of A, resume by A->B
(wait-condition-variable cv4 mtx2) ;; trigger first execution of C, resume by C->B

(let B ()
  (set! v (+ v 1))
  (format #t "B: v=~a~%" v)
  (B->A)
  (B->C)
  (B))
and you can use this shell snippet to run the program repeatedly and see when it goes wrong:
for (( i=1 ; ; i+=1 )) do
  echo "=== Run $i ==="
  MD5_1=$(guile message.scm | tee "/tmp/message_$i.txt" | head -10000 | md5sum)
  if [[ $i -gt 1 && "$MD5_2" != "$MD5_1" ]]; then
    echo "bug"
    break
  fi
  MD5_2="$MD5_1"
done
I've implemented an equivalent C version. It seems to work properly according to the logic!
#include <stdio.h>
#include <pthread.h>

pthread_mutex_t mtx1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mtx2 = PTHREAD_MUTEX_INITIALIZER;
/* statically initialize the condition variables as well */
pthread_cond_t cv1 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cv2 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cv3 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cv4 = PTHREAD_COND_INITIALIZER;
int v = 0;

void BA(void) {
    pthread_cond_signal(&cv1);
    pthread_cond_wait(&cv3, &mtx1);
}

void AB(void) {
    pthread_cond_signal(&cv3);
    pthread_cond_wait(&cv1, &mtx1);
}

void BC(void) {
    pthread_cond_signal(&cv2);
    pthread_cond_wait(&cv4, &mtx2);
}

void CB(void) {
    pthread_cond_signal(&cv4);
    pthread_cond_wait(&cv2, &mtx2);
}

void *A(void *args) {
    pthread_mutex_lock(&mtx1);
    for (;;) {
        AB();
        v += 1;
        printf("A: v=%d\n", v);
    }
}

void *C(void *args) {
    pthread_mutex_lock(&mtx2);
    for (;;) {
        CB();
        v += 1;
        printf("C: v=%d\n", v);
    }
}

int main() {
    pthread_t t1, t2;
    pthread_mutex_lock(&mtx1);
    pthread_mutex_lock(&mtx2);
    pthread_create(&t1, NULL, A, NULL);
    pthread_create(&t2, NULL, C, NULL);
    pthread_cond_wait(&cv3, &mtx1);
    pthread_cond_wait(&cv4, &mtx2);
    for (;;) {
        v += 1;
        printf("B: v=%d\n", v);
        BA();
        BC();
    }
    return 0;
}
Comparing against this equivalent C implementation suggests that the Guile Scheme version is the one not functioning properly. The C implementation works just as intended.

Why does the following code generate a deadlock?

Golang newbie here. Can somebody explain why the following code generates a deadlock?
I am aware that I could signal completion by sending true on a done channel (chan bool), but I don't want to use that approach.
package main

import (
    "fmt"
    "sync"
    "time"
)

var wg2 sync.WaitGroup

func producer2(c chan<- int) {
    for i := 0; i < 5; i++ {
        time.Sleep(time.Second * 10)
        fmt.Println("Producer Writing to chan %d", i)
        c <- i
    }
}

func consumer2(c <-chan int) {
    defer wg2.Done()
    fmt.Println("Consumer Got value %d", <-c)
}

func main() {
    c := make(chan int)
    wg2.Add(5)
    fmt.Println("Starting .... 1")
    go producer2(c)
    go consumer2(c)
    fmt.Println("Starting .... 2")
    wg2.Wait()
}
Following is my understanding, and I know that it is wrong:
The channel will be blocked the moment 0 is written to it within the loop of the producer function.
So I expect the channel to be emptied by the consumer afterwards.
As the channel is emptied in step 2, the producer function can put in another value, get blocked again, and step 2 repeats.
Your original deadlock is caused by wg2.Add(5): you were waiting for 5 goroutines to finish, but only one finished; you called wg2.Done() only once. Change this to wg2.Add(1) and your program will run without error.
However, I suspect that you intended to consume all the values from the channel, not just one as you do now. If you change the consumer function to:
func consumer2(c <-chan int) {
    defer wg2.Done()
    for i := range c {
        fmt.Printf("Consumer Got value %d\n", i)
    }
}
you will get another deadlock, because the channel is never closed in the producer function and the consumer waits for values that never arrive. Adding close(c) at the end of the producer function fixes that.
Why does it error?
Running your code gives the following error:
➜ gochannel go run dl.go
Starting .... 1
Starting .... 2
Producer Writing to chan 0
Consumer Got value 0
Producer Writing to chan 1
fatal error: all goroutines are asleep - deadlock!
Here is why:
There are three goroutines in your code: main, producer2, and consumer2. When the program runs,
producer2 sends the number 0 to the channel
consumer2 receives 0 from the channel, and exits
producer2 sends 1 to the channel, but no one is consuming, since consumer2 has already exited
producer2 is waiting
main executes wg2.Wait(), but the wait group counter never reaches zero, so main is waiting too
Two goroutines are waiting, doing nothing, and nothing will happen no matter how long you wait. It is a deadlock! Go detects this and panics.
There are two concepts you are confusing here:
how waitgroup works
how to receive all values from a channel
I'll explain them briefly here; there are already many articles about them on the internet.
how waitgroup works
A WaitGroup is a way to wait for all goroutines to finish. When running goroutines in the background, it's important to know when all of them have quit, so that further action can be taken.
In your case we run two goroutines, so at the beginning we should call wg2.Add(2), and each goroutine should call wg2.Done() to signal that it is done.
Receive data from a channel
When receiving data from a channel, if you know exactly how many values will be sent, use a for loop this way:
for i := 0; i < N; i++ {
    data := <-c
    process(data)
}
Otherwise, use it this way:
for data := range c {
    process(data)
}
Also, don't forget to close the channel when there is no more data to send.
How to fix it?
With the above explanation, the code can be fixed as follows:
package main

import (
    "fmt"
    "sync"
    "time"
)

var wg2 sync.WaitGroup

func producer2(c chan<- int) {
    defer wg2.Done()
    for i := 0; i < 5; i++ {
        time.Sleep(time.Second * 1)
        fmt.Printf("Producer Writing to chan %d\n", i)
        c <- i
    }
    close(c)
}

func consumer2(c <-chan int) {
    defer wg2.Done()
    for i := range c {
        fmt.Printf("Consumer Got value %d\n", i)
    }
}

func main() {
    c := make(chan int)
    wg2.Add(2)
    fmt.Println("Starting .... 1")
    go producer2(c)
    go consumer2(c)
    fmt.Println("Starting .... 2")
    wg2.Wait()
}
Here is another possible way to fix it.
The expected output
The fixed code gives the following output:
➜ gochannel go run dl.go
Starting .... 1
Starting .... 2
Producer Writing to chan 0
Consumer Got value 0
Producer Writing to chan 1
Consumer Got value 1
Producer Writing to chan 2
Consumer Got value 2
Producer Writing to chan 3
Consumer Got value 3
Producer Writing to chan 4
Consumer Got value 4

F# Async Task Cancellation without Token

I am trying to parse hundreds of C source files to map dozens of software signal variables to the names of physical hardware pins. I am trying to do this asynchronously in F#:
IndirectMappedHWIO
|> Seq.map IndirectMapFromFile //this is the function with the regex in it
|> Async.Parallel
|> Async.RunSynchronously
The issue is that I cannot figure out how to pass in a CancellationToken to end my task. Each task is reading around 300 C files so I want to be able to stop the task's execution as soon as the regex matches. I tried using Thread.CurrentThread.Abort() but that does not seem to work. Any ideas on how to pass in a CancellationToken for each task? Or any other way to cancel a task based on a condition?
let IndirectMapFromFile pin =
    async {
        let innerFunc filename =
            use streamReader = new StreamReader (filePath + filename)
            while not streamReader.EndOfStream do
                try
                    let line1 = streamReader.ReadLine()
                    streamReader.ReadLine() |> ignore
                    let line2 = streamReader.ReadLine()
                    if (obj.ReferenceEquals(line2, null)) then
                        Thread.CurrentThread.Abort() //DOES NOT WORK!!
                    else
                        let m1 = Regex.Match(line1, @"^.*((Get|Put)\w+).*$");
                        let m2 = Regex.Match(line2, @"\s*return\s*\((\s*" + pin.Name + "\s*)\);");
                        if (m1.Success && m2.Success) then
                            pin.VariableName <- m1.Groups.[1].Value
                            Thread.CurrentThread.Abort() //DOES NOT WORK!!
                        else
                            ()
                with
                | ex -> ()
            ()
        Directory.GetFiles(filePath, "Rte*") //all C source and header files that start with Rte
        |> Array.iter innerFunc
    }
Asyncs cancel on designated operations, such as on return!, let!, or do!; they don't just kill the thread in any unknown state, which is not generally safe. If you want your asyncs to stop, they could for example:
be recursive and iterate via return!. The caller would provide a CancellationToken to Async.RunSynchronously and catch the resulting OperationCanceledException when the job is done.
check some thread-safe state and decide to stop depending on it.
Note that those are effectively the same thing: the workers who iterate over the data check what is going on and cancel in an orderly fashion. In other words, it is clear when exactly they check for cancellation.
Using async cancellation might result in something like this:
let canceler = new System.Threading.CancellationTokenSource()

let rec worker myParameters =
    async {
        // do stuff
        if amIDone() then canceler.Cancel()
        else return! worker (...) }

let workers = (...) |> Async.Parallel
try Async.RunSynchronously(workers, -1, canceler.Token) |> ignore
with :? System.OperationCanceledException -> ()
Stopping from common state could look like this:
let keepGoing = ref true

let rec worker myParameters =
    if !keepGoing then
        // do stuff
        if amIDone() then keepGoing := false
        worker (...)

let makeWorker initParams = async { worker initParams }
// make workers
workers |> Async.Parallel |> Async.RunSynchronously |> ignore
If the exact timing of cancellation is relevant, I believe the second method may not be safe, as there may be a delay until other threads see the variable change. This doesn't seem to matter here though.
