libwebsockets can integrate with external event loops, but by default it uses "poll". How do we add a polling watcher for a file descriptor?
The default loop looks like:
while (n >= 0 && !interrupted) {
    n = lws_service(context, 0);
}
Do we add our own poll() call to the service loop, something like:
while (n >= 0 && !interrupted) {
    n = lws_service(context, 0);
    poll(fd_struct, 1, timeout); /* fd_struct: a struct pollfd watching our descriptor */
}
with, say, timeout = 0?
Thanks for all the help
Ok, so I'm trying to model a CLH-RW lock in Promela.
The way the lock works is simple, really:
The queue consists of a tail, to which both readers and writers enqueue a node containing a single bool succ_must_wait. They do so by creating a new node and CAS-ing it with the tail.
The tail thereby becomes the node's predecessor, pred.
Then they spin-wait on pred.succ_must_wait until it is false.
Readers first increment a reader counter ncritR and then set their own flag to false, allowing multiple readers in the critical section at the same time. Releasing a read lock simply means decrementing ncritR again.
Writers wait until ncritR reaches zero, then enter the critical section. They do not set their flag to false until the lock is released.
I'm kind of struggling to model this in promela, though.
My current attempt (see below) tries to make use of arrays, where each node basically consists of a number of array entries.
This fails. Let's say A enqueues itself, then B enqueues itself. The queue will then look like this:
S <- A <- B
Where S is a sentinel node.
The problem now is that when A runs to completion and re-enqueues, the queue will look like
S <- A <- B <- A'
In actual execution, this is absolutely fine because A and A' are distinct node objects. And since A.succ_must_wait will have been set to false when A first released the lock, B will eventually make progress, and therefore A' will eventually make progress.
What happens in the array-based Promela model below, though, is that A and A' occupy the same array positions, causing B to miss the fact that A has released the lock. This creates a deadlock where B is (wrongly) waiting for A' instead of A, while A' is (correctly) waiting for B.
A possible "solution" to this could be to have A wait until B acknowledges the release. But that would not be true to how the lock works.
Another "solution" would be to wait for a CHANGE in pred.succ_must_wait, where a release would increment succ_must_wait, rather than reset it to 0.
But I'm intending to model a version of the lock where pred may change (i.e. where a node may be allowed to disregard some of its predecessors), and I'm not entirely convinced that something like the increasing version wouldn't cause an issue with this change.
So what's the "smartest" way to model an implicit queue like this in Promela?
/* CLH-RW Lock */
/*pid: 0 = init, 1-2 = reader, 3-4 = writer*/
ltl liveness{
([]<> reader[1]@progress_reader)
&& ([]<> reader[2]@progress_reader)
&& ([]<> writer[3]@progress_writer)
&& ([]<> writer[4]@progress_writer)
}
bool initialised = 0;
byte ncritR;
byte ncritW;
byte tail;
bool succ_must_wait[5];
byte pred[5];
init{
assert(_pid == 0);
ncritR = 0;
ncritW = 0;
/*sentinel node*/
tail = 0;
pred[0] = 0;
succ_must_wait[0] = 0;
initialised = 1;
}
active [2] proctype reader()
{
assert(_pid >= 1);
(initialised == 1)
do
:: else ->
succ_must_wait[_pid] = 1;
atomic {
pred[_pid] = tail;
tail = _pid;
}
(succ_must_wait[pred[_pid]] == 0)
ncritR++;
succ_must_wait[_pid] = 0;
atomic {
/*freeing previous node for garbage collection*/
pred[_pid] = 0;
}
/*CRITICAL SECTION*/
progress_reader:
assert(ncritR >= 1);
assert(ncritW == 0);
ncritR--;
atomic {
/*necessary to model the fact that the next access creates a new queue node*/
if
:: tail == _pid -> tail = 0;
:: else ->
fi
}
od
}
active [2] proctype writer()
{
assert(_pid >= 1);
(initialised == 1)
do
:: else ->
succ_must_wait[_pid] = 1;
atomic {
pred[_pid] = tail;
tail = _pid;
}
(succ_must_wait[pred[_pid]] == 0)
(ncritR == 0)
atomic {
/*freeing previous node for garbage collection*/
pred[_pid] = 0;
}
ncritW++;
/* CRITICAL SECTION */
progress_writer:
assert(ncritR == 0);
assert(ncritW == 1);
ncritW--;
succ_must_wait[_pid] = 0;
atomic {
/*necessary to model the fact that the next access creates a new queue node*/
if
:: tail == _pid -> tail = 0;
:: else ->
fi
}
od
}
First of all, a few notes:
You don't need to initialize your variables to 0, since:
The default initial value of all variables is zero.
see the docs.
You don't need to enclose a single instruction inside an atomic {} statement, since any elementary statement is executed atomically. For better efficiency of the verification process, whenever possible, you should use d_step {} instead. Here you can find a related stackoverflow Q/A on the topic.
init {} is guaranteed to have _pid == 0 when one of the two following conditions holds:
no active proctype is declared
init {} is declared before any other active proctype appearing in the source code
Active processes, including init {}, are spawned in order of appearance inside the source code. All other processes are spawned in order of appearance of the corresponding run ... statement.
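For instance, in a hypothetical two-line model like the following (not taken from your code), foo gets _pid == 0 because it appears first, and init gets _pid == 1:
active proctype foo() { skip }  /* spawned first:  _pid == 0 */
init { skip }                   /* spawned second: _pid == 1 */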
I identified the following issues on your model:
the instruction pred[_pid] = 0 is useless because that memory location is only read after the assignment pred[_pid] = tail
When you release the successor of a node, you only set succ_must_wait[_pid] to 0 and you don't invalidate the node instance on which your successor is waiting. This is the problem that you identified in your question but were unable to solve. The solution I propose is to add the following code:
pid j;
for (j: 1..4) {
    if
    :: pred[j] == _pid -> pred[j] = 0;
    :: else -> skip;
    fi
}
This should be enclosed in an atomic {} block.
You correctly set tail back to 0 when you find that the node that has just left the critical section is also the last node in the queue, and you correctly enclose this operation in an atomic {} block. However, just before you enter this atomic {} block, some other process that was still waiting in some idle state may decide to execute its initial atomic block and copy the current value of tail, which corresponds to the node that has just expired, into its own pred[_pid] memory location. If the node that has just exited the critical section now attempts to join the queue once again, setting its own succ_must_wait[_pid] to 1, you get another instance of circular wait among processes. The correct approach is to merge this part with the code releasing the successor.
The following inline function can be used to release the successor of a given node:
inline release_succ(i)
{
    d_step {
        pid j;
        for (j: 1..4) {
            if
            :: pred[j] == i ->
                pred[j] = 0;
            :: else ->
                skip;
            fi
        }
        succ_must_wait[i] = 0;
        if
        :: tail == _pid -> tail = 0;
        :: else -> skip;
        fi
    }
}
The complete model follows:
byte ncritR;
byte ncritW;
byte tail;
bool succ_must_wait[5];
byte pred[5];

init
{
    skip
}

inline release_succ(i)
{
    d_step {
        pid j;
        for (j: 1..4) {
            if
            :: pred[j] == i ->
                pred[j] = 0;
            :: else ->
                skip;
            fi
        }
        succ_must_wait[i] = 0;
        if
        :: tail == _pid -> tail = 0;
        :: else -> skip;
        fi
    }
}

active [2] proctype reader()
{
loop:
    succ_must_wait[_pid] = 1;
    d_step {
        pred[_pid] = tail;
        tail = _pid;
    }
trying:
    (succ_must_wait[pred[_pid]] == 0)
    ncritR++;
    release_succ(_pid);
    // critical section
progress_reader:
    assert(ncritR > 0);
    assert(ncritW == 0);
    ncritR--;
    goto loop;
}

active [2] proctype writer()
{
loop:
    succ_must_wait[_pid] = 1;
    d_step {
        pred[_pid] = tail;
        tail = _pid;
    }
trying:
    (succ_must_wait[pred[_pid]] == 0) && (ncritR == 0)
    ncritW++;
    // critical section
progress_writer:
    assert(ncritR == 0);
    assert(ncritW == 1);
    ncritW--;
    release_succ(_pid);
    goto loop;
}
I added the following properties to the model:
p0: the writer with _pid equal to 4 goes through its progress state infinitely often, provided that it is given the chance to execute some instruction infinitely often:
ltl p0 {
([]<> (_last == 4)) ->
([]<> writer[4]@progress_writer)
};
This property should be true.
p1: there is never more than one reader in the critical section:
ltl p1 {
([] (ncritR <= 1))
};
Obviously, we expect this property to be false in a model that matches your specification.
p2: there is never more than one writer in the critical section:
ltl p2 {
([] (ncritW <= 1))
};
This property should be true.
p3: there isn't any node that is the predecessor of two other nodes at the same time, unless such node is node 0:
ltl p3 {
[] (
(((pred[1] != 0) && (pred[2] != 0)) -> (pred[1] != pred[2])) &&
(((pred[1] != 0) && (pred[3] != 0)) -> (pred[1] != pred[3])) &&
(((pred[1] != 0) && (pred[4] != 0)) -> (pred[1] != pred[4])) &&
(((pred[2] != 0) && (pred[3] != 0)) -> (pred[2] != pred[3])) &&
(((pred[2] != 0) && (pred[4] != 0)) -> (pred[2] != pred[4])) &&
(((pred[3] != 0) && (pred[4] != 0)) -> (pred[3] != pred[4]))
)
};
This property should be true.
p4: it is always true that whenever writer with _pid equal to 4 tries to access the critical section then it will eventually get there:
ltl p4 {
[] (writer[4]@trying -> <> writer[4]@progress_writer)
};
This property should be true.
The outcome of the verification matches our expectations:
~$ spin -search -ltl p0 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p0)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 3305, errors: 0
...
~$ spin -search -ltl p1 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p1)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 1692, errors: 1
...
~$ spin -search -ltl p2 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p2)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 3115, errors: 0
...
~$ spin -search -ltl p3 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p3)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 3115, errors: 0
...
~$ spin -search -ltl p4 -a clhrw_lock.pml
...
Full statespace search for:
never claim + (p4)
assertion violations + (if within scope of claim)
acceptance cycles + (fairness disabled)
invalid end states - (disabled by never claim)
State-vector 68 byte, depth reached 3115, errors: 0
...
I am working on a Promela model that is fairly simple. Using two different modules, it acts as a crosswalk/traffic light. The first module is the traffic light, which outputs the current signal (green, red, yellow, pending). This module also receives an input signal called "pedestrian", which indicates that there are pedestrians wanting to cross. The second module acts as the crosswalk. It receives the output signals from the traffic light module (green, yellow, red) and outputs the pedestrian signal to the traffic light module. This module simply defines whether the pedestrian(s) is crossing, waiting or not present.
My issue is that once the count value goes to 60, a timeout occurs. I believe the statement "sigG_out ! 1" is causing the error but I do not know why. I have attached the image of the trace I receive from the command line. I am completely new to Spin and Promela, so I am not sure how to use the information from the trace to find my issue in the code. Any help is greatly appreciated.
Here is the code for the complete model:
mtype = {red, green, yellow, pending, none, crossing, waiting};
mtype traffic_mode;
mtype crosswalk_mode;
int count;
chan pedestrian_chan = [0] of {byte};
chan sigR_chan = [0] of {byte};
chan sigG_chan = [0] of {byte};
chan sigY_chan = [0] of {byte};
ltl l1 {!<> (pedestrian_chan[0] == 1) && (traffic_mode == green || traffic_mode == yellow || traffic_mode == pending)}
ltl l2 {[]<> (pedestrian_chan[0] == 1) -> crosswalk_mode == crossing }
proctype traffic_controller(chan pedestrian_in, sigR_out, sigG_out, sigY_out)
{
do
::if
::(traffic_mode == red) ->
count = count + 1;
if
::(count >= 60) ->
sigG_out ! 1;
count = 0;
traffic_mode = green;
:: else -> skip;
fi
::(traffic_mode == green) ->
if
::(count < 60) ->
count = count + 1;
::(pedestrian_in == 1 & count < 60) ->
count = count + 1;
traffic_mode = pending;
::(pedestrian_in == 1 & count >= 60)
count = 0;
traffic_mode = yellow;
fi
::(traffic_mode == pending) ->
count = count + 1;
if
::(count >= 60) ->
sigY_out ! 1;
count = 0;
traffic_mode = yellow;
::else -> skip;
fi
::(traffic_mode == yellow) ->
count = count + 1;
if
::(count >= 5) ->
sigR_out ! 1;
count = 0;
traffic_mode = red;
:: else -> skip;
fi
fi
od
}
proctype crosswalk(chan sigR_in, sigG_in, sigY_in, pedestrian_out)
{
do
::if
::(crosswalk_mode == crossing) ->
if
::(sigG_in == 1) -> crosswalk_mode = none;
fi
::(crosswalk_mode == none) ->
if
:: (1 == 1) -> crosswalk_mode = none
:: (1 == 1) ->
pedestrian_out ! 1
crosswalk_mode = waiting
fi
::(crosswalk_mode == waiting) ->
if
::(sigR_in == 1) -> crosswalk_mode = crossing;
fi
fi
od
}
init
{
count = 0;
traffic_mode = red;
crosswalk_mode = crossing;
atomic
{
run traffic_controller(pedestrian_chan, sigR_chan, sigG_chan, sigY_chan);
run crosswalk(sigR_chan, sigG_chan, sigY_chan, pedestrian_chan);
}
}
You are using channels incorrectly; this line in particular, I wouldn't even know how to interpret:
:: (sigG_in == 1) ->
Your channels are synchronous, which means that whenever a process sends something on one side, another process must be listening on the other end of the channel in order for the message to be delivered. Otherwise, the sending process blocks until the situation changes. Your channels are synchronous because you declared them with size 0.
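As a minimal sketch of the effect (a hypothetical model, not your code): the sender below blocks forever on its rendezvous send, because no process ever executes a matching receive, and a simulation run ends with a timeout:
chan c = [0] of { byte };
active proctype sender()
{
    c!1   /* blocks: nobody is ready to execute the matching c?... receive */
}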
To read from a channel, you need to use the proper syntax:
int some_var;
...
some_channel?some_var;
// here some_var contains value received through some_channel
It seems a bit pointless to use three different channels to send different signals. What about using a single channel with three different values?
mtype = { RED, GREEN, YELLOW };
chan c = [0] of { mtype };
...
c!RED
...
// (some other process)
...
mtype var;
c?var;
// here var contains RED
...
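Putting the two points together, here is a minimal sketch (with hypothetical process names, not your full model) of a controller and a crosswalk exchanging all three signals over one rendezvous channel:
mtype = { RED, GREEN, YELLOW };
chan light = [0] of { mtype };
active proctype controller()
{
    do
    :: light!GREEN
    :: light!YELLOW
    :: light!RED
    od
}
active proctype crosswalk()
{
    mtype sig;
    do
    :: light?sig ->
        if
        :: sig == RED -> skip   /* pedestrians may cross */
        :: else -> skip         /* pedestrians keep waiting */
        fi
    od
}
Each send blocks until the other process is ready to receive, which is exactly the rendezvous behaviour described above.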
I have an app that iterates over tens of thousands of records using various enumerators (such as directory enumerators).
I am seeing OS X report that my process was "Caught burning CPU", since it's taking a large amount of CPU in doing so.
What I would like to do is build in a "pressure valve" such as a
[NSThread sleepForTimeInterval:cpuDelay];
that does not block other processes/threads on things like a dual core machine.
My processing is happening on a separate thread, but I can't break out of and re-enter the enumerator loop and use NSTimers to allow the machine to "breathe".
Any suggestions - should [NSThread sleepForTimeInterval:cpuDelay]; be working?
I run this stuff inside a dispatch queue:
// backgroundPriorityAttr is assumed to be a queue attribute created elsewhere,
// e.g. with dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, QOS_CLASS_BACKGROUND, 0)
if (!primaryTask)
    primaryTask = dispatch_queue_create("com.me.app.task1", backgroundPriorityAttr);
dispatch_async(primaryTask, ^{
    [self doSync];
});
Try wrapping your processing in an NSOperation and setting a lower QoS priority. Here is a little more information:
http://nshipster.com/nsoperation/
Here is a code example I made up. The operation is triggered in the view load event:
let processingQueue = NSOperationQueue()

override func viewDidLoad() {
    let backgroundOperation = NSBlockOperation {
        // My very long and intensive processing
        let nPoints = 100000000000
        var nPointsInside = 0
        for _ in 1...nPoints {
            let (x, y) = (drand48() * 2 - 1, drand48() * 2 - 1)
            if x * x + y * y <= 1 {
                nPointsInside += 1
            }
        }
        let _ = 4.0 * Double(nPointsInside) / Double(nPoints)
    }
    backgroundOperation.queuePriority = .Low
    backgroundOperation.qualityOfService = .Background
    processingQueue.addOperation(backgroundOperation)
}
How can I use the Timer class and timer events to turn this loop into one that executes chunks at a time?
My current method of just running the loop keeps freezing up the Flash/AIR UI.
I'm trying to achieve pseudo-multithreading. Yes, this is from wavwriter.as:
// Write to file in chunks of converted data.
while (dataInput.bytesAvailable > 0)
{
tempData.clear();
// Resampling logic variables
var minSamples:int = Math.min(dataInput.bytesAvailable/4, 8192);
var readSampleLength:int = minSamples;//Math.floor(minSamples/soundRate);
var resampleFrequency:int = 100; // Every X frames drop or add frames
var resampleFrequencyCheck:int = (soundRate-Math.floor(soundRate))*resampleFrequency;
var soundRateCeil:int = Math.ceil(soundRate);
var soundRateFloor:int = Math.floor(soundRate);
var jlen:int = 0;
var channelCount:int = (numOfChannels-inputNumChannels);
/*
trace("resampleFrequency: " + resampleFrequency + " resampleFrequencyCheck: " + resampleFrequencyCheck
+ " soundRateCeil: " + soundRateCeil + " soundRateFloor: " + soundRateFloor);
*/
var value:Number = 0;
// Assumes data is in samples of float value
for (var i:int = 0;i < readSampleLength;i+=4)
{
value = dataInput.readFloat();
// Check for sanity of float value
if (value > 1 || value < -1)
throw new Error("Audio samples not in float format");
// Special case with 8bit WAV files
if (sampleBitRate == 8)
value = (bitResolution * value) + bitResolution;
else
value = bitResolution * value;
// Resampling Logic for non-integer sampling rate conversions
jlen = (resampleFrequencyCheck > 0 && i % resampleFrequency < resampleFrequencyCheck) ? soundRateCeil : soundRateFloor;
for (var j:int = 0; j < jlen; j++)
{
writeCorrectBits(tempData, value, channelCount);
}
}
dataOutput.writeBytes(tempData);
}
}
I once implemented pseudo-multithreading in AS3 by splitting the task into chunks, instead of splitting the data into chunks.
My solution might not be optimal, but it worked nicely for me in the context of performing a large Depth-First Search while allowing the Flash game to flow nicely.
Use a variable ticks to count computation "ticks", similar to CPU clock cycles. Every time you perform some operation, you increment this counter by 1. Increment it even more after a heavier operation is performed.
In specific parts of your code, insert checkpoints where you check if ticks > threshold, where threshold is a parameter you want to tune after you have this pseudo multithreading working.
If ticks > threshold at the checkpoint, you save the current state of your task, set ticks to zero, then exit the function.
The method has to be retried later, so here you employ a Timer with an interval parameter that should also be tuned later.
When restarting the method, use the saved state of your paused task to detect where your task should be resumed.
For your specific situation, I would suggest splitting the tasks of the for loops, instead of thinking about the while loop. The idea is to interrupt the for loops, remember their state, then continue from there after the resting interval.
To simplify, imagine that we have only the outermost for loop. A sketch of the new method is:
WhileLoop: while (dataInput.bytesAvailable > 0 && ticks < threshold)
{
if(!didSubTaskA) {
// do subtask A...
ticks += 2;
didSubTaskA = true;
}
if(ticks > threshold) {
ticks = 0;
restTimer.reset();
restTimer.start(); // This dispatches an event that should trigger this method
break WhileLoop;
}
for (var i:int = next_unused_i; i < readSampleLength; i += 4) {
next_unused_i = i + 4; // resume at the next sample; the loop advances in steps of 4
// do subtask B...
ticks += 1;
if(ticks > threshold) {
ticks = 0;
restTimer.reset();
restTimer.start();
break WhileLoop;
}
}
next_unused_i = 0;
didSubTaskA = false;
}
if(ticks > threshold) {
ticks = 0;
restTimer.reset();
restTimer.start();
}
The variables ticks, threshold, restTimer, next_unused_i, and didSubTaskA are important and can't be local method variables; they could be static or class variables. Subtask A is the part where you set up the "Resampling logic variables"; the variables used there can't be local either, so make them class variables as well, so that their values persist when you leave and come back to the method.
You can make it look nicer by creating your own Task class and storing the whole state of your interrupted "threaded" algorithm in it. Also, you could maybe make the checkpoint a local function.
I didn't test the code above so I can't guarantee it works, but the idea is basically that. I hope it helps.
When a context action such as "convert if to switch" is invoked, all my comments are lost. Can this be prevented?
e.g.:
int a = 1;
int i;
if (a == 1)
//comment I want to keep 1
i = 1;
else if (a == 2)
//comment I want to keep 2
i = 2;
else
i = 3;
This is a known issue - here it is on YouTrack (and here's a duplicate).