Forcing InProgress state machines to always resolve - Alloy

I am trying to model executions of concurrent AWS Lambda functions to look for timing bugs. In my model, once a Lambda is triggered, it should always end up in either a Complete or Failed state. The problem is that I can't figure out how to tell Alloy to throw out any state sequence where Lambdas are still InProgress in the final state.
Here is what I have. In the actual model there would be side effects of the state transitions that I would want to make assertions about: basically "once every lambda is complete or failed, X should be impossible".
abstract sig State {}
sig InProgress, Complete, Failed extends State {}

var sig Lambda {
    var state: one State
} {
    this in Lambda'
}
pred complete[l: Lambda] {
    l.state' in Complete + Failed
}

pred can_step[l: Lambda] {
    l.state = InProgress
}

pred wait[l: Lambda] {
    l.state' = l.state
}

pred step_lambdas[] {
    all l: Lambda {
        can_step[l] implies (complete[l] or wait[l]) else wait[l]
    }
    lone l: Lambda | complete[l] // only let one lambda complete at a time
}

pred trigger_lambda[] {
    one l: Lambda' | Lambda' = Lambda + l' and l.state' = InProgress
}

pred do_nothing[] {
    Lambda != none
    Lambda = Lambda'
}
fact {
    Lambda = none
    always trigger_lambda or do_nothing
    always step_lambdas
}

check {
    eventually {
        some Lambda
        no l: Lambda | can_step[l]
    }
}
Update #1: I tried changing the check statement to the one below. I believe it should show me a violation with t0, t1, t2 where at t2 there is a Lambda with state = Failed. Instead, it shows me a violation with t0, t1 where at t1 there is a Lambda with state = InProgress. I don't see how this violates the check.
...
check {
    some Lambda
    (all l: Lambda | not can_step[l]) implies all l: Lambda | l.state = Complete
}

I figured it out. A check without temporal operators is evaluated only in the initial state (where Lambda is empty), so the trace above trivially violates it. I need to express the check in the form "all lambdas must always be able to step or be in a completed state."
check {
    always all l: Lambda | can_step[l] or l.state in Complete
}
Complete model, which finds a counterexample where a lambda ends in the Failed state:
abstract sig State {}
sig InProgress, Complete, Failed extends State {}

var sig Lambda {
    var state: one State
} {
    this in Lambda'
}

pred complete[l: Lambda] {
    l.state' in Complete + Failed
}

pred can_step[l: Lambda] {
    l.state = InProgress
}

pred wait[l: Lambda] {
    l.state' = l.state
}

pred step_lambdas[] {
    all l: Lambda {
        can_step[l] implies (complete[l] or wait[l]) else wait[l]
    }
    lone l: Lambda | complete[l] // only let one lambda complete at a time
}

pred trigger_lambda[] {
    one l: Lambda' | Lambda' = Lambda + l' and l.state' = InProgress
}

pred do_nothing[] {
    Lambda != none
    Lambda = Lambda'
}

fact {
    Lambda = none
    always trigger_lambda or do_nothing
    always step_lambdas
}

check {
    always all l: Lambda | can_step[l] or l.state in Complete
}
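Note that this check verifies a safety invariant rather than forcing resolution. If you additionally want to throw out traces in which some lambda stays InProgress forever (the original goal), one option in Alloy 6 is a fairness-style fact. This is an unverified sketch against the model above:

fact termination {
    // from some instant on, no lambda is ever InProgress again
    eventually always (no l: Lambda | can_step[l])
}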

Related

Thread safe access to a variable in a class

In an application where multiple threads could be running, I am not sure whether these methods will be accessed from a multithreaded environment, but to be safe I've written a test class to demonstrate the situation.
One method was programmed to be thread safe (please also comment on whether it's done right), but the rest were not.
In a situation like this, where there is only a single line of code inside remove and add, is it necessary to make them thread safe, or would that be overkill?
import Foundation

class Some {}

class Test {
    var dict = [String: Some]()
    func has(key: String) -> Bool {
        var has = false
        dispatch_sync(dispatch_queue_create("has", nil), { [unowned self] in
            has = self.dict[key] != nil
        })
        return has
    }
    func remove(key: String) -> Some? {
        let ob = dict[key]
        dict[key] = nil
        return ob
    }
    func add(key: String, ob: Some) {
        dict[key] = ob
    }
}
Edit after comments
class Some {}

class Test {
    var dict = [String: Some]()
    private let queue: dispatch_queue_t = dispatch_queue_create("has", DISPATCH_QUEUE_CONCURRENT)
    func has(key: String) -> Bool {
        var has = false
        dispatch_sync(queue) {
            has = self.dict[key] != nil
        }
        return has
    }
    func remove(key: String) -> Some? { // returns the removed value
        var removed: Some?
        dispatch_barrier_sync(queue) {
            removed = self.dict.removeValueForKey(key)
        }
        return removed
    }
    func add(key: String, ob: Some) { // sync, not async
        dispatch_barrier_sync(queue) {
            self.dict[key] = ob
        }
    }
}
The way you are checking whether a key exists is incorrect. You are creating a new queue every time, so each call synchronizes against its own private queue; there is no mutual exclusion between calls, and the dictionary accesses are never serialized against each other.
The way I would do it is like so:
class Some {}

class Test {
    var dict = [String: Some]()
    private let queue: dispatch_queue_t = dispatch_queue_create("has", DISPATCH_QUEUE_CONCURRENT)
    func has(key: String) -> Bool {
        var has = false
        dispatch_sync(queue) { [weak self] in
            guard let strongSelf = self else { return }
            has = strongSelf.dict[key] != nil
        }
        return has
    }
    func remove(key: String) {
        dispatch_barrier_async(queue) { [weak self] in
            guard let strongSelf = self else { return }
            strongSelf.dict[key] = nil
        }
    }
    func add(key: String, ob: Some) {
        dispatch_barrier_async(queue) { [weak self] in
            guard let strongSelf = self else { return }
            strongSelf.dict[key] = ob
        }
    }
}
Firstly, I am creating a concurrent queue to be used for accessing the dictionary, stored as a property of the object rather than created anew every time. The queue is private, as it is only used internally.
When I want to get a value out of the class, I dispatch a block synchronously to the queue and wait for the block to finish before returning whether or not the key exists. Since this does not mutate the dictionary, it is safe for multiple blocks of this sort to run concurrently on the queue.
When I want to add or remove values from the dictionary, I add the block to the queue with a barrier. A barrier block waits for all previously submitted blocks to finish, runs alone, and only then lets later blocks run concurrently again. I am using an async dispatch because I don't need to wait for a return value.
Imagine you have multiple threads checking whether keys exist, or adding and removing values. The reads all happen concurrently, but when a block runs that will change the dictionary, every other block waits until that change is completed and then starts running again.
In this way, you have the speed and convenience of running reads concurrently, and the thread safety of blocking while the dictionary is being mutated.
Edited to add
self is marked as weak in the block so that it doesn't create a reference cycle. As @MartinR mentioned in the comments, it is possible that the object is deallocated while blocks are still in the queue. If that happened with a strong or unowned capture, you would probably get a runtime error trying to access the dictionary, as it may also have been deallocated.
By declaring self to be weak within the block, if the object still exists then self will not be nil and can be conditionally unwrapped into strongSelf, which creates a strong reference so that self cannot be deallocated while the instructions in the block are carried out. When those instructions complete, strongSelf goes out of scope and releases the strong reference to self.
This is sometimes known as the "strong self, weak self dance".
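As an aside, in Swift 4.2 and later the unwrapped name may shadow self directly, so the has(key:) body from the Swift 3 version below can drop the strongSelf spelling:

queue.sync { [weak self] in
    guard let self = self else { return }
    has = self.dict[key] != nil
}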
Edited again: Swift 3 version
class Some {}

class Test {
    var dict = [String: Some]()
    private let queue = DispatchQueue(label: "has", qos: .default, attributes: .concurrent)
    func has(key: String) -> Bool {
        var has = false
        queue.sync { [weak self] in
            guard let strongSelf = self else { return }
            has = strongSelf.dict[key] != nil
        }
        return has
    }
    func remove(key: String) {
        queue.async(flags: .barrier) { [weak self] in
            guard let strongSelf = self else { return }
            strongSelf.dict[key] = nil
        }
    }
    func add(key: String, ob: Some) {
        queue.async(flags: .barrier) { [weak self] in
            guard let strongSelf = self else { return }
            strongSelf.dict[key] = ob
        }
    }
}
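To exercise the class above, here is a minimal sketch: DispatchQueue.concurrentPerform fans the iterations out across a pool of threads, so the reads overlap while each barrier write briefly excludes every other block.

let test = Test()
DispatchQueue.concurrentPerform(iterations: 100) { i in
    test.add(key: "key\(i)", ob: Some())  // barrier write
    _ = test.has(key: "key\(i)")          // concurrent read
    test.remove(key: "key\(i)")           // barrier write
}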
Here is another Swift 3 solution, which provides thread-safe access via AnyObject.
It relies on objc_sync_enter/objc_sync_exit, which allocate a recursive pthread_mutex associated with the given object on demand.
class LatencyManager {
    private var latencies = [String: TimeInterval]()

    func set(hostName: String, latency: TimeInterval) {
        synchronizedBlock(lockedObject: latencies as AnyObject) { [weak self] in
            self?.latencies[hostName] = latency
        }
    }

    /// Provides thread-safe access to the given object
    private func synchronizedBlock(lockedObject: AnyObject, block: () -> Void) {
        objc_sync_enter(lockedObject)
        block()
        objc_sync_exit(lockedObject)
    }
}
Then you can call, for example, set(hostName: "stackoverflow.com", latency: 1). Note, however, that latencies as AnyObject bridges the Swift dictionary to an NSDictionary instance that is not guaranteed to be the same object on each call, so the lock may provide no real mutual exclusion; locking on a stable object such as self is safer.
UPDATE
You can simply define a function in a Swift file (not in a class):
/// Provides thread-safe access to the given object
public func synchronizedAccess(to object: AnyObject, _ block: () -> Void) {
    objc_sync_enter(object)
    block()
    objc_sync_exit(object)
}
And use it like this:
synchronizedAccess(to: myObject) {
    myObject.foo()
}

Applying Future where an exception can be thrown

This code works fine, but I want to manage the threads with Future.
The sendSMS method normally takes 3 to 5 seconds to execute. I want to apply Future; I have applied it in one place, but I would like to know whether that is enough.
val c = for {
  t <- Future { doSendSms("+9178787878787", "i scare with threads") }
} yield t

c.map { res =>
  res match {
    case e: Error =>
      Ok(write(Map("result" -> "error")))
    case Success() =>
      Ok(write(Map("result" -> "success")))
  }
}

def doSendSms(recipient: String, body: String): SentSmsResult = {
  try {
    sendSMS(recipient, body)
    Success()
  } catch {
    case twilioEx: TwilioRestException =>
      Error(twilioEx.toString)
    case e: Exception =>
      Error(e.toString)
  }
}

// sending sms via Twilio; this method takes 3 to 5 seconds to execute
def sendSMS(smsTo: String, body: String) = {
  val params = Map("To" -> smsTo, "From" -> twilioNumber, "Body" -> body)
  val messageFactory = client.getAccount.getSmsFactory
  messageFactory.create(params)
}
If not, how should I manage the Future in this code?
I would use recover:
val c = for {
  t <- doSendSms("+9178787878787", "i scare with threads")
} yield t

def doSendSms(recipient: String, body: String): Future[SentSmsResult] =
  Future {
    sendSMS(recipient, body)
    Success()
  }.recover {
    case twilioEx: TwilioRestException => Error(twilioEx.toString)
    case e: Exception => Error(e.toString)
  }
recover will catch exceptions thrown during the future's execution, allowing you to return a new result wrapped in a Future. As the documentation states:
The recover combinator creates a new future which holds the same result as the original future if it completed successfully. If it did not then the partial function argument is applied to the Throwable which failed the original future.
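To illustrate those semantics in isolation, here is a small self-contained sketch (hypothetical values, no Twilio involved):

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val risky: Future[String] = Future {
  throw new RuntimeException("boom")
}.recover {
  // the partial function receives the Throwable that failed the original future
  case e: RuntimeException => s"recovered: ${e.getMessage}"
}

println(Await.result(risky, 1.second)) // prints "recovered: boom"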

How to gracefully implement EventEmitter in Swift

I am writing a hiredis binding for Swift and working on the async API part.
I would like to have something similar to EventEmitter in Node.js.
objectToBeListened.on('event', (data) => { ... })
objectToBeListened.emit('event')
Ideally I would have only one "on" and one "emit" function for each class.
I currently use an enum for all event types and a switch in the "on" function, plus an extra struct that stores all the callback functions.
I could not implement a universal "emit" function: I have only glanced at the generics part of Swift, but is it even possible? It seems that Swift doesn't have variadic templates.
Anyway, my prototype code is really ugly and hard to maintain. Is there a better way to implement an EventEmitter gracefully?
class EEProto {
    var A: Int
    var B: Double

    typealias EventChangeA = (Int, Int) -> Void
    typealias EventChangeB = (Double, Double) -> Void
    typealias EventChanged = () -> Void

    struct RegisteredEvent {
        var eventChangeA: [EventChangeA]
        var eventChangeB: [EventChangeB]
        var eventChanged: [EventChanged]
    }

    enum EventType {
        case changeA([EventChangeA])
        case changeB([EventChangeB])
        case changed([EventChanged])
    }

    var registeredEvents: RegisteredEvent

    init(A: Int, B: Double) {
        self.A = A
        self.B = B
        registeredEvents = RegisteredEvent(eventChangeA: [], eventChangeB: [], eventChanged: [])
    }

    func on(_ event: EventType) {
        switch event {
        case .changeA(let events):
            registeredEvents.eventChangeA += events
        case .changeB(let events):
            registeredEvents.eventChangeB += events
        case .changed(let events):
            registeredEvents.eventChanged += events
        }
    }

    func resetEvents(_ eventType: EventType) {
        switch eventType {
        case .changeA:
            registeredEvents.eventChangeA = []
        case .changeB:
            registeredEvents.eventChangeB = []
        case .changed:
            registeredEvents.eventChanged = []
        }
    }

    func setA(_ newA: Int) {
        let oldA = A
        A = newA
        for cb in registeredEvents.eventChangeA {
            cb(oldA, newA)
        }
        for cb in registeredEvents.eventChanged {
            cb()
        }
    }

    func setB(_ newB: Double) {
        let oldB = B
        B = newB
        for cb in registeredEvents.eventChangeB {
            cb(oldB, newB)
        }
        for cb in registeredEvents.eventChanged {
            cb()
        }
    }
}

let inst = EEProto(A: 10, B: 5.5)
inst.on(EEProto.EventType.changeA([{
    print("from \($0) to \($1)")
}]))
inst.on(EEProto.EventType.changeB([{
    print("from \($0) to \($1)")
}]))
inst.on(EEProto.EventType.changed([{
    print("value changed")
}]))
inst.setA(10)
inst.setB(3.14)
You can use a library like FlexEmit. It works very similarly to the EventEmitter in NodeJS.
Basically you define your events as Swift types (these can be any struct, enum, class, etc.):
struct EnergyLevelChanged {
    let newEnergyLevel: Int
    init(to newValue: Int) { newEnergyLevel = newValue }
}

struct MovedTo {
    let x, y: Int
}
Then you create an emitter and add event listeners for different types of events you want to listen for:
let eventEmitter = Emitter()

eventEmitter.when { (newLocation: MovedTo) in
    print("Moved to coordinates \(newLocation.x):\(newLocation.y)")
}

eventEmitter.when { (event: EnergyLevelChanged) in
    print("Changed energy level to", event.newEnergyLevel)
}
And finally you send your events using a simple emit function
eventEmitter.emit(EnergyLevelChanged(to: 60)) // prints "Changed energy level to 60"
eventEmitter.emit(MovedTo(x: 0, y: 0)) // prints "Moved to coordinates 0:0"
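If you would rather not take on a dependency, the same idea can be hand-rolled in a few lines. The sketch below is not FlexEmit's actual implementation, just one way to get a single generic "when" and "emit": listener lists are keyed by the event's concrete type.

class Emitter {
    private var listeners: [ObjectIdentifier: [(Any) -> Void]] = [:]

    // register a handler for events of type E
    func when<E>(_ handler: @escaping (E) -> Void) {
        listeners[ObjectIdentifier(E.self), default: []].append { any in
            if let event = any as? E { handler(event) }
        }
    }

    // deliver an event to every handler registered for its type
    func emit<E>(_ event: E) {
        listeners[ObjectIdentifier(E.self)]?.forEach { $0(event) }
    }
}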

Scala synchronized consumer producer

I want to implement something like the producer-consumer problem (with only one piece of information transmitted at a time), but I want the producer to wait for someone to take its message before returning.
Here is an example that doesn't block the producer but otherwise works.
class Channel[T] {
  private var _msg: Option[T] = None

  def put(msg: T): Unit = {
    this.synchronized {
      waitFor(_msg == None)
      _msg = Some(msg)
      notifyAll
    }
  }

  def get(): T = {
    this.synchronized {
      waitFor(_msg != None)
      val ret = _msg.get
      _msg = None
      notifyAll
      return ret
    }
  }

  private def waitFor(b: => Boolean) =
    while (!b) wait()
}
How can I change it so the producer gets blocked (as the consumer is)?
I tried to add another waitFor at the end of put, but sometimes my producer doesn't get released.
For instance, with put ; get in one thread and get ; put in another, it works most of the time, but sometimes the first put does not terminate and the left thread never even runs the get method (I print something once the put call terminates, and in that case it never gets printed).
This is why you should use a standard class: SynchronousQueue, in this case.
If you really want to work through your problematic code, start by giving us a failing test case or a stack trace from when the put is blocking.
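For reference, here is a minimal sketch of the rendezvous behavior with java.util.concurrent.SynchronousQueue, whose put blocks until a take arrives, which is exactly the blocking producer asked for:

import java.util.concurrent.SynchronousQueue

object Demo extends App {
  val channel = new SynchronousQueue[String]()

  // the producer's put blocks until a consumer takes the message
  val producer = new Thread(() => {
    channel.put("hello")
    println("put returned: the message was taken")
  })
  producer.start()

  println("got: " + channel.take())
  producer.join()
}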
You can do this by means of a BlockingQueue descendant whose put() method creates a semaphore/event object, queues it up with the passed message, and then makes the producer thread wait on it.
The consumer's get() method extracts a message from the queue and signals its semaphore, allowing the original producer to run on.
This gives a 'synchronous queue' with actual queueing functionality, should that be what you want.
I came up with something that appears to be working.
class Channel[T] {
  class Transfer[A] {
    protected var _msg: Option[A] = None

    def msg_=(a: A) = _msg = Some(a)

    def msg: A = {
      // Reading the message destroys it
      val ret = _msg.get
      _msg = None
      return ret
    }

    def isEmpty = _msg == None
    def notEmpty = !isEmpty
  }

  object Transfer {
    def apply[A](msg: A): Transfer[A] = {
      val t = new Transfer[A]()
      t.msg = msg
      return t
    }
  }

  // Hacky, but Transfer has to be invariant
  object Idle extends Transfer[T]

  protected var offer: Transfer[T] = Idle
  protected var request: Transfer[T] = Idle

  def put(msg: T): Unit = {
    this.synchronized {
      // push an offer as soon as possible
      waitFor(offer == Idle)
      offer = Transfer(msg)
      // request the transfer
      requestTransfer
      // wait for the transfer to go (i.e. the msg to be absorbed)
      waitFor(offer.isEmpty)
      // delete the completed offer
      offer = Idle
      notifyAll
    }
  }

  def get(): T = {
    this.synchronized {
      // push a request as soon as possible
      waitFor(request == Idle)
      request = new Transfer()
      // request the transfer
      requestTransfer
      // wait for the transfer to go (i.e. the msg to be delivered)
      waitFor(request.notEmpty)
      val ret = request.msg
      // delete the completed request
      request = Idle
      notifyAll
      return ret
    }
  }

  protected def requestTransfer() {
    this.synchronized {
      if (offer != Idle && request != Idle) {
        request.msg = offer.msg
        notifyAll
      }
    }
  }

  protected def waitFor(b: => Boolean) =
    while (!b) wait()
}
It has the advantage of respecting symmetry between producer and consumer but it is a bit longer than what I had before.
Thanks for your help.
Edit: It is better but still not safe…

Calling Thread.sleep inside an actor

I have a function retry which basically looks like this (simplified):
object SomeObject {
  def retry[T](n: Int)(fn: => T): Option[T] = {
    val res = try {
      Some(fn)
    } catch {
      case _: Exception => None
    }
    res match {
      case Some(x) => Some(x)
      case None =>
        if (n > 1)
          // make it sleep for a little while
          retry(n - 1)(fn)
        else None
    }
  }
}
I need to make some pause between the attempts. As I was told, it's not acceptable to call Thread.sleep(123) inside an actor:
class MyActor extends Actor {
  //......
  def someFunc = {
    Thread.sleep(456) // it's not acceptable in an actor; there is another way to do it
  }
}
Obviously, I don't know whether or not a client will use SomeObject.retry inside an actor:
class MyActor extends Actor {
  //......
  def someFunc = {
    SomeObject.retry(5)(someRequestToServer) // oops, SomeObject.retry uses Thread.sleep!
  }
}
So if I just add:
res match {
case Some(x) => Some(x)
case None =>
if (n > 1)
//make it sleep for a little while
Thread.sleep(123) // ops, what if it's being called inside an actor by a client?!
retry(n - 1)(fn)
else None
}
}
that wouldn't be sensible, would it? If not, what do I do?
Yes, calling Thread.sleep is a bad idea, as in an actor system threads are normally a limited resource shared between actors. You do not want an actor calling sleep and hogging a thread from other actors.
What you should do instead is use the Scheduler (see the docs) to have your actor send a message to itself some time in the future to retry. To do this, you would have to move the retry code out of SomeObject and into the actor:
import scala.concurrent.duration._

class MyActor extends Actor {
  import context.system.dispatcher

  def receive = {
    case DoIt(retries) if retries > 0 =>
      SomeObject.attempt(someRequestToServer) match {
        case Some(x) => ...
        case None =>
          context.system.scheduler.scheduleOnce(5.seconds, self, DoIt(retries - 1))
      }
  }
}
Then, if you were using SomeObject.attempt outside of an actor system:
def attempt(retries: Int) = {
  SomeObject.attempt(someRequestToServer) match {
    case Some(x) => ...
    case None if retries > 0 =>
      Thread.sleep(123)
      attempt(retries - 1)
  }
}
Where SomeObject.attempt is:
object SomeObject {
  def attempt[T](fn: => T): Option[T] =
    try {
      Some(fn)
    } catch {
      case _: Exception => None
    }
}
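For completeness, a retry with a delay can also be built outside an actor without blocking, using akka.pattern.after, which delays a Future on the scheduler instead of occupying a thread. A sketch, assuming a classic ActorSystem is in scope:

import akka.actor.ActorSystem
import akka.pattern.after
import scala.concurrent.Future
import scala.concurrent.duration._

def retry[T](n: Int, delay: FiniteDuration)(fn: => T)(implicit system: ActorSystem): Future[T] = {
  import system.dispatcher
  Future(fn).recoverWith {
    // on failure, wait `delay` on the scheduler (no thread is blocked), then try again
    case _: Exception if n > 1 => after(delay, system.scheduler)(retry(n - 1, delay)(fn))
  }
}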
