Better understanding of F# Hopac library - multithreading

I have started using Hopac as an alternative to Async/TPL and I love it. I understand basic usage, but some aspects are still not clear.
First, could we compare Alt to F# lazy, so that a job inside an Alt is only evaluated on Alt.pick?
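For reference, this is the F# lazy behaviour the comparison is about (a minimal sketch, nothing Hopac-specific; whether an Alt defers its job in the same way until Alt.pick is exactly what is being asked):
// F# lazy defers the body until forced and caches the result afterwards.
let v = lazy (printfn "evaluated"; 42)
let a = v.Force()   // prints "evaluated" and returns 42
let b = v.Force()   // returns the cached 42; the body does not run again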
Second, is this implementation of AutoResetEvent correct and idiomatic for Hopac?
/// <summary>
/// MSDN: The AutoResetEvent class represents a local wait handle event that resets automatically
/// when signaled, after releasing a single waiting thread. An AutoResetEvent object is automatically
/// reset to non-signaled by the system after a single waiting thread has been released.
/// If no threads are waiting, the event object's state remains signaled.
///
/// Hopac's alternative to http://blogs.msdn.com/b/pfxteam/archive/2012/02/11/10266923.aspx
/// </summary>
type HopacAutoResetEvent (initialState : bool) =
    // We will wait on take, and set with send
    let setChannel : Ch<unit> = ch()
    do if initialState then start <| Ch.send setChannel ()
    new() = HopacAutoResetEvent(false)
    member this.Wait(timeout:int) : Job<bool> =
        let timedOut : Alt<bool> =
            ((float timeout) |> TimeSpan.FromMilliseconds |> Timer.Global.timeOut)
            >>=? fun () -> Job.result false
        let signaled = Ch.Alt.take setChannel >>=? fun () -> Job.result true
        signaled <|> timedOut
    // From docs, important for <|>:
    // The given alternatives are processed in a left-to-right order with short-cut evaluation.
    // In other words, given an alternative of the form first <|> second, the first alternative
    // is first instantiated and, if it is pickable, is committed to and the second alternative
    // will not be instantiated at all.
    member this.Set() : Job<unit> =
        // From MSDN: Also, if Set is called when there are no threads waiting and the EventWaitHandle
        // is already signaled, the call has no effect.
        // Try-take followed by send covers all cases:
        // - if there were no waiters and the state was signaled, we steal the state and send it back immediately
        // - if there was a waiting thread, or the state was not signaled, there is nothing to steal and we just signal
        (Ch.Try.take setChannel) >>. Ch.send setChannel ()
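A hedged usage sketch of the type above (assuming it compiles against the same Hopac version as the question's code; run and job are Hopac's top-level builders, as also used in the ManualResetEvent snippet below):
// Hedged usage sketch: Set once, then Wait twice with a 100 ms timeout.
let demoAutoResetEvent () =
    let are = HopacAutoResetEvent()
    run <| job {
        do! are.Set()
        let! first  = are.Wait(100)   // expected: true, and the event auto-resets
        let! second = are.Wait(100)   // expected: false once the timeout fires
        return first, second
    }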
Third, is this implementation of ManualResetEvent correct and idiomatic for Hopac?
/// <summary>
/// Hopac's alternative to http://blogs.msdn.com/b/pfxteam/archive/2012/02/11/10266920.aspx
/// </summary>
type HopacManualResetEvent (initialState : bool) =
    [<VolatileFieldAttribute>]
    let mutable state : bool = initialState
    let setChannel : MChan<bool> = run <| Multicast.create ()
    let lock = Lock.Now.create()
    new() = HopacManualResetEvent(false)
    member this.Wait() : Job<bool> =
        let rec loop () =
            job {
                if state then return true
                else
                    let! port = Multicast.port setChannel
                    let! res = (Multicast.recv port) // waiting here
                    if res then return true
                    else return! loop ()
            }
        loop ()
    // From Multicast.fsi: **Sends** a message to all of the ports listening to the multicast channel.
    // Send must mean the same as in Ch
    member this.Set() : Job<unit> =
        (Multicast.multicast setChannel true)   // there could be no waiters
        |>> (fun _ -> state <- true)            // in any case we set the state
        >>% ()                                  // and return unit
        |> (Lock.duringJob lock)
    member this.Reset() : Job<unit> =
        (Multicast.multicast setChannel false)  // (redundant?) if there are takers, res in loop() will be false and loop will iterate
        |>> (fun _ -> state <- false)           // in any case we set the state
        >>% ()
        |> (Lock.duringJob lock)
Cross-post: https://github.com/VesaKarvonen/Hopac/issues/26

Related

Stopping helper goroutine which waits for UDP data without freezing main goroutine in Go

I have the following receiver construct based on time.Ticker, which calls the receive() method at a given constant time interval, controlled by channels (to fetch some useful data from a server).
There is a main routine with the main application logic and a second routine with the receiver logic (defined in the Start() method). The problem is this: when I try to stop the receiver routine from the main routine (using the Stop() method) while receive() is waiting for data from the UDP connection, the main routine seems to freeze (I have a GUI there and it stops responding to interaction). I don't understand why this happens, because the channels should allow me to interact with the receiver routine. I need to be able to stop the receiver at a given point in time even if it is still waiting for UDP data, and the main routine should keep working. I would be grateful for an explanation and some ideas on how to stop this receiver without freezing the main routine.
Greetings
type RtpReceiver struct {
    Ticker    *time.Ticker
    Interval  time.Duration
    doneCheck chan bool
    started   bool
    // some other fields
}

func NewRtpReceiver() *RtpReceiver {
    // some logic
    return &RtpReceiver{
        doneCheck: make(chan bool),
        started:   false,
    }
}

func (receiver *RtpReceiver) receive() {
    _, _, _ = (*receiver.UdpCon).ReadFrom(receiver.buffer)
    // some logic
}

func (receiver *RtpReceiver) Start() {
    receiver.started = true
    receiver.Ticker = time.NewTicker(receiver.Interval)
    go func() {
        for {
            select {
            case <-receiver.doneCheck:
                return
            case <-receiver.Ticker.C:
                receiver.receive()
            }
        }
    }()
}

func (receiver *RtpReceiver) Stop() {
    if receiver.started {
        receiver.doneCheck <- true
        receiver.Ticker.Stop()
        receiver.started = false
    }
}

How can these sync methods be effectively unit tested?

Based on answers to this question, I feel happy with the simplicity and ease of use of the following two methods for synchronization:
func synchronized(lock: AnyObject, closure: () -> Void) {
    objc_sync_enter(lock)
    closure()
    objc_sync_exit(lock)
}

func synchronized<T>(lock: AnyObject, closure: () -> T) -> T {
    objc_sync_enter(lock)
    defer { objc_sync_exit(lock) }
    return closure()
}
But to be sure they're actually doing what I want, I want to wrap these in piles of unit tests. How can I write unit tests that will effectively test these methods (and show they are actually synchronizing the code)?
Ideally, I'd also like these unit tests to be as simple and as clear as possible. Presumably, this test should be code that, if run outside the synchronization block, would give one set of results, but give an entirely separate set of results inside these synchronized blocks.
Here is a runnable XCTest that verifies the synchronization. If you synchronize delayedAddToArray, it will work, otherwise it will fail.
class DelayedArray: NSObject {
    func synchronized(lock: AnyObject, closure: () -> Void) {
        objc_sync_enter(lock)
        closure()
        objc_sync_exit(lock)
    }

    private var array = [String]()

    func delayedAddToArray(expectation: XCTestExpectation) {
        synchronized(self) {
            let arrayCount = self.array.count
            self.array.append("hi")
            sleep(5)
            XCTAssert(self.array.count == arrayCount + 1)
            expectation.fulfill()
        }
    }
}

func testExample() {
    let expectation = self.expectationWithDescription("desc")
    let expectation2 = self.expectationWithDescription("desc2")
    let delayedArray: DelayedArray = DelayedArray()
    // This is an example of a functional test case.
    let thread = NSThread(target: delayedArray, selector: "delayedAddToArray:", object: expectation)
    let secondThread = NSThread(target: delayedArray, selector: "delayedAddToArray:", object: expectation2)
    thread.start()
    sleep(1)
    secondThread.start()
    self.waitForExpectationsWithTimeout(15, handler: nil)
}

MailboxProcessor and interaction with GUI Thread

I've created an agent that interacts with the GUI by means of a SynchronizationContext:
type AsyncWorker(id:int) =
    let someEvent = new Event<int * string>()
    let errorEvent = new Event<_>()
    let syncContext = SynchronizationContext.Current
    let f (inbox: MailboxProcessor<_>) =
        let rec loop () = async {
            let! message = inbox.Receive ()
            ...
            syncContext.RaiseEvent someEvent (id, str)
        }
Is there any danger in that? What if I had 20 agents? Are these raised events synchronized? Suppose I have a long-running handler for this event: can I be sure that the other agents' event handlers will wait for it to finish?
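RaiseEvent is not a member of SynchronizationContext itself; the snippet presumably relies on a helper extension roughly like the following (a hedged sketch, with the signature guessed from the call site):
open System.Threading

// Hypothetical helper assumed by the snippet above (not a built-in member):
// post the trigger to the captured context so handlers run on its thread.
type SynchronizationContext with
    member ctx.RaiseEvent (event : Event<'a>) (args : 'a) =
        ctx.Post((fun _ -> event.Trigger args), null)
If the helper does post like this, triggers from all agents are marshalled to the single captured GUI context and their handlers run on that thread one after another, which is the behaviour the question is asking about.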

Scala synchronized consumer producer

I want to implement something like the producer-consumer problem (with only one piece of information transmitted at a time), but I want the producer to wait for someone to take its message before continuing.
Here is an example that doesn't block the producer but works otherwise.
class Channel[T]
{
  private var _msg : Option[T] = None

  def put(msg : T) : Unit =
  {
    this.synchronized
    {
      waitFor(_msg == None)
      _msg = Some(msg)
      notifyAll
    }
  }

  def get() : T =
  {
    this.synchronized
    {
      waitFor(_msg != None)
      val ret = _msg.get
      _msg = None
      notifyAll
      return ret
    }
  }

  private def waitFor(b : => Boolean) =
    while(!b) wait
}
How can I change it so that the producer gets blocked (as the consumer is)?
I tried to add another waitFor at the end of put, but sometimes my producer doesn't get released.
For instance, with put ; get || get ; put, most of the time it works, but sometimes the first put does not terminate and the left thread never even runs the get method (I print something once the put call terminates, and in this case it never gets printed).
This is why you should use a standard class, in this case SynchronousQueue.
If you really want to work through your problematic code, start by giving us a failing test case or a stack trace from when the put is blocking.
You can do this by means of a BlockingQueue descendant whose producer put() method creates a semaphore/event object, queues it up with the passed message, and then has the producer thread wait on it.
The consumer get() method extracts a message from the queue and signals its semaphore, allowing the original producer to continue.
This gives you a 'synchronous queue' with actual queueing functionality, should that be what you want.
I came up with something that appears to be working.
class Channel[T]
{
  class Transfer[A]
  {
    protected var _msg : Option[A] = None

    def msg_=(a : A) = _msg = Some(a)

    def msg : A =
    {
      // Reading the message destroys it
      val ret = _msg.get
      _msg = None
      return ret
    }

    def isEmpty = _msg == None
    def notEmpty = !isEmpty
  }

  object Transfer {
    def apply[A](msg : A) : Transfer[A] =
    {
      var t = new Transfer[A]()
      t.msg = msg
      return t
    }
  }

  // Hacky but Transfer has to be invariant
  object Idle extends Transfer[T]

  protected var offer : Transfer[T] = Idle
  protected var request : Transfer[T] = Idle

  def put(msg : T) : Unit =
  {
    this.synchronized
    {
      // push an offer as soon as possible
      waitFor(offer == Idle)
      offer = Transfer(msg)
      // request the transfer
      requestTransfer
      // wait for the transfer to go (i.e. the msg to be absorbed)
      waitFor(offer isEmpty)
      // delete the completed offer
      offer = Idle
      notifyAll
    }
  }

  def get() : T =
  {
    this.synchronized
    {
      // push a request as soon as possible
      waitFor(request == Idle)
      request = new Transfer()
      // request the transfer
      requestTransfer
      // wait for the transfer to go (i.e. the msg to be delivered)
      waitFor(request notEmpty)
      val ret = request.msg
      // delete the completed request
      request = Idle
      notifyAll
      return ret
    }
  }

  protected def requestTransfer()
  {
    this.synchronized
    {
      if(offer != Idle && request != Idle)
      {
        request.msg = offer.msg
        notifyAll
      }
    }
  }

  protected def waitFor(b : => Boolean) =
    while(!b) wait
}
It has the advantage of respecting symmetry between producer and consumer but it is a bit longer than what I had before.
Thanks for your help.
Edit: It is better but still not safe…

Thread-safe raising of F# events

I'm trying to do an F# async computation that calls a C# callback when ready. The code is the following:
type Worker() =
    let locker = obj()
    let computedValue = ref None
    let started = ref false
    let completed = Event<_>()
    let doNothing() = ()
    member x.Compute(callBack:Action<_>) =
        let workAlreadyStarted, action =
            lock locker (fun () ->
                match !computedValue with
                | Some value ->
                    true, (fun () -> callBack.Invoke value)
                | None ->
                    completed.Publish.Add callBack.Invoke
                    if !started then
                        true, doNothing
                    else
                        started := true
                        false, doNothing)
        action()
        if not workAlreadyStarted then
            async {
                // heavy computation to calc result
                let result = "result"
                lock locker (fun () ->
                    computedValue := Some result
                    completed.Trigger result)
            } |> Async.Start
But there's a problem: I want to trigger the completed event outside the lock, while still making sure that the triggering is thread-safe. (Actually, in this small example I could just trigger the event outside the lock, since I know no one else will subscribe to it, but that's not always the case.)
In C# events this is very easy to accomplish:
object locker = new object();
event Action<string> MyEvent;

void Raise()
{
    Action<string> myEventCache;
    lock (locker)
    {
        myEventCache = MyEvent;
    }
    if (myEventCache != null)
    {
        myEventCache("result");
    }
}
How can I do the equivalent with F# events, freezing the list of subscribers inside the lock but invoking it outside the lock?
This isn't as straightforward in F# because Event<_> doesn't expose its subscriber list, which is mutated by Add/Remove.
You can avoid this mutation by creating a new event for each handler.
let mutable completed = Event<_>()
//...
let ev = Event<_>()
let iev = ev.Publish
iev.Add(completed.Trigger)
iev.Add(callBack.Invoke)
completed <- ev
//...
let ev =
    lock locker <| fun () ->
        computedValue := Some result
        completed
ev.Trigger(result)
