In NodeMCU, gpio.trig() not working as expected

In the doc, the syntax for gpio.trig is
gpio.trig(pin, [type [, callback_function]])
However, one of the three call forms allowed by that syntax yields an error:
gpio.mode( 8, gpio.INT )
gpio.trig( 8 ) -- this works
gpio.trig( 8, 'both' ) -- this bombs
-- ERROR: stdin:1: bad argument #3 to 'trig' (invalid callback type)
Is there some nuance I'm missing here, or is there a bug in the doc?

Feel free to verify and possibly report this on our GitHub issues list. I can't be sure, but this may (though it should not) happen if you have already cleared the callback on a pin. The docs say:
Establish or clear a callback function to run on interrupt for a pin.
So it may choke if you call gpio.trig multiple times on the same pin without a callback function.
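For reference, here is a minimal Lua sketch of the documented three-argument form (pin 8 and the 'both' edge are taken from the question; the exact callback signature may differ between firmware versions):
gpio.mode( 8, gpio.INT )
-- register an interrupt callback for both edges
gpio.trig( 8, 'both', function( level )
    print( 'pin 8 interrupt, level = ' .. level )
end )
-- later: clear the callback / disable the interrupt again
gpio.trig( 8 )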

Related

Can aio_error be used to poll for completion of aio_write?

We have some code that goes along the lines of
aiocb* aiocbptr = new aiocb;
// populate aiocbptr with info for the write
aio_write( aiocbptr );
// Then do this periodically:
if (aio_error( aiocbptr ) == 0) {
    delete aiocbptr;
}
aio_error is meant to return 0 when the write is completed, and hence we assume that we can call delete on aiocbptr at this point.
This mostly seems to work OK, but we recently started experiencing random crashes. The evidence points to the data pointed to by aiocbptr being modified after the call to delete.
Is there any issue using aio_error to poll for aio_write completion like this? Is there a guarantee that the aiocb will not be modified after aio_error has returned 0?
This change seems to indicate that something may have since been fixed in aio_error. We are running on x86 RHEL 7 Linux with glibc 2.17, which predates this fix.
We tried using aio_suspend in addition to aio_error, so once aio_error has returned 0, we call aio_suspend, which is meant to wait for the operation to complete. But the operation should have already completed, so aio_suspend should do nothing. However, it seemed to fix the crashes.
Yes, my commit was fixing a missing memory barrier. Using e.g. aio_suspend triggers the memory barrier and thus fixes it too.
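For illustration, a minimal sketch of the workaround described above (the try_reap helper is made up; it assumes one outstanding write per control block):
#include <aio.h>

// Poll for completion; once aio_error() reports done, wait on aio_suspend()
// before freeing the control block, which forces the memory barrier that
// older glibc versions were missing.
bool try_reap( aiocb*& aiocbptr ) {
    if ( aio_error( aiocbptr ) != 0 )
        return false;                    // still in progress (or failed)
    const aiocb* list[1] = { aiocbptr };
    aio_suspend( list, 1, nullptr );     // already complete; this just synchronizes
    aio_return( aiocbptr );              // collect the result exactly once
    delete aiocbptr;
    aiocbptr = nullptr;
    return true;
}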

Why is Py_None reference count so high?

While reading through the documentation and some SO questions,
I learned that Py_INCREF() should be applied to Py_None and that applying Py_DECREF() to Py_None is a bad idea unless you hold a reference to it.
But when I look up the reference count of Py_None:
from sys import getrefcount
print(getrefcount(None))
It's at 34169. I'd have to Py_DECREF in a loop 30000 times to make my interpreter crash. A single INCREF or DECREF doesn't do anything. Can anybody explain?
There are just a lot of references to None, all over the place. Many of them are implicitly created, like __doc__ for functions and classes with no docstring, or the first element of every code object's co_consts, but plenty are created explicitly. The refcount is a count of those references.
You should not treat None specially when managing references. As with any other object, Py_INCREF and Py_DECREF should be used when you create or destroy a reference to the object in C code, if some other code isn't responsible for that.
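For example, a minimal CPython extension sketch (the function name is made up) that hands None back to the caller with its refcount handled correctly:
#include <Python.h>

/* Returns None to the caller. Py_RETURN_NONE expands to
   Py_INCREF(Py_None); return Py_None; which creates the new reference
   the caller will eventually own and release. */
static PyObject *
do_nothing( PyObject *self, PyObject *args )
{
    Py_RETURN_NONE;
}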
I'd have to Py_DECREF in a loop 30000 times to make my interpreter crash.
Okay. So you write a function that mismanages its refcounting and incorrectly decrements None's refcount by 1.
Then your function gets called 30000 times. That's a completely unremarkable number of times to call a function. Your program crashes, quite likely in a location completely unrelated to your broken function, when unrelated reference manipulation somewhere else causes the refcount of None to hit 0 because the value was already too low due to your function.
Or maybe your function only gets called 10000 times. Then, later, when your program is shutting down, most of those None references get cleaned up, and your program crashes messily during shutdown when the refcount of None hits 0 anyway. Maybe that means your program fails to save important data. Maybe it just means your program produces a really unprofessional-looking error message every time it shuts down. Either way, customers are unhappy, and you've got a nasty bug to debug.

QThread::idealThreadCount() always returning "2"

I need to display, in a QSpinBox, the number of cores, or threads, that the CPU has. The problem is:
QThread cpuInfo(this); //get CPU info
ui->spnBx_nmb_nodes->setValue(cpuInfo.idealThreadCount()); //get thread count
This always returns "2". I tried it on a "2 cores/4 threads" notebook, a "4 cores/8 threads" computer, and a "12 cores/24 threads" server. In all cases, it returns "2" as the ideal thread count.
Can someone please shed some light on this?
idealThreadCount()'s implementation differs across operating systems:
On Windows, QThread::idealThreadCount() calls the Win32 function GetNativeSystemInfo() and from its results, returns the dwNumberOfProcessors value from the SYSTEM_INFO struct that call populates.
On Linux (and most other Unix-y OS)'s, QThread::idealThreadCount() calls sysconf(_SC_NPROCESSORS_ONLN) and returns that value.
On MacOS/X (and BSD and iOS), QThread::idealThreadCount() calls sysctl(CTL_HW, HW_NCPU) and returns the value it receives from there.
QThread::idealThreadCount() also contains some other back-end implementations for less-commonly used OS's, which I won't attempt to summarize here; if you need to look for yourself, the code is at lines 461-515 of qtbase/src/corelib/thread/qthread_unix.cpp.
Given all of the above, the question becomes: why is the OS call that Qt forwards to returning 2 instead of a more appropriate number? It sounds like a bug to me, although one other possibility is that idealThreadCount() is returning the correct number, but your QSpinBox is clamping that number down to 2 for some reason. If you haven't done so already, I suggest printing out the value returned by cpuInfo.idealThreadCount() directly, in addition to passing it to setValue(), just to be sure.
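For example, a minimal check (note that idealThreadCount() is static, so no QThread instance is needed):
#include <QThread>
#include <QDebug>

int main()
{
    // Print the raw value before it ever reaches the spin box.
    qDebug() << "idealThreadCount =" << QThread::idealThreadCount();
    return 0;
}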
Try the following code:
auto const value = 8;
auto *nmb_nodes = ui->spnBx_nmb_nodes;
nmb_nodes->setValue(value);
Q_ASSERT(nmb_nodes->value() == value);
My bet is that the assertion will fail. So your problem is likely not what you think it is.

Node: Q Promise delay

Here are some simple questions based on behaviour I noticed in the following example running in node:
Q('THING 1').then(console.log.bind(console));
console.log('THING 2');
The output for this is:
> "THING 2"
> "THING 1"
Questions:
1) Why is Q implemented to wait before running the callback on a value that is immediately known? Why isn't Q smart enough to allow the first line to synchronously issue its output before the 2nd line runs?
2) What is the time lapse between "THING 2" and "THING 1" being output? Is it a single process tick?
3) Could there be performance concerns with values that are deeply wrapped in promises? For example, does Q(Q(Q("THING 1"))) asynchronously wait 3 times as long to complete, even though it can be efficiently synchronously resolved?
This is actually done on purpose. It makes the behaviour consistent whether or not the value is already known, so there is only one order of evaluation: no matter whether the promise has already settled, you can depend on that order being the same.
Also, doing it otherwise would make it possible to write code that tests whether the promise has settled, and by design that should not be observable or acted upon.
This is pretty much the same as writing callback-style code like this:
function fun(args, callback) {
    if (!args) {
        process.nextTick(callback, 'error');
        return;
    }
    // ...
}
so that anyone who calls it with:
fun(x, function (err) {
// A
});
// B
can be sure that A will never run before B.
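The same guarantee in promise form (shown with a native Promise for brevity; a Q promise behaves the same way under the A+ spec):
var settled = Promise.resolve('already settled');
settled.then(function (value) { console.log('A', value); });
console.log('B');
// "B" is always logged before "A", even though the promise
// was settled before .then() was called.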
The spec
See the Promises/A+ Specification, The then Method section, point 4:
onFulfilled or onRejected must not be called until the execution context stack contains only platform code.
See also note 1:
Here "platform code" means engine, environment, and promise implementation code. In practice, this requirement ensures that onFulfilled and onRejected execute asynchronously, after the event loop turn in which then is called, and with a fresh stack. This can be implemented with either a "macro-task" mechanism such as setTimeout or setImmediate, or with a "micro-task" mechanism such as MutationObserver or process.nextTick. Since the promise implementation is considered platform code, it may itself contain a task-scheduling queue or "trampoline" in which the handlers are called.
So this is actually mandated by the spec.
It was discussed extensively to make sure that this requirement is clear - see:
https://github.com/promises-aplus/promises-spec/pull/70
https://github.com/promises-aplus/promises-spec/pull/104
https://github.com/promises-aplus/promises-spec/issues/100
https://github.com/promises-aplus/promises-spec/issues/139
https://github.com/promises-aplus/promises-spec/issues/229

Lambda Timing out after calling callback

I'm using two Lambda functions on the Node.js 4.3 runtime. I run the first and it invokes the second synchronously (sync is the intent). The problem is that the second one times out (at 60 seconds), even though it actually reaches a successful finish after only 22 seconds.
Here's the flow between the two Lambda functions:
I am no longer getting CloudWatch logs for Lambda function A, but the real problem (I think) is function B, which times out for no apparent reason.
Here are some CloudWatch logs to illustrate this:
The code at the end of function B -- which includes the "Success" log statement seen in the picture above -- is included below:
Originally I only had the callback(null, 'successful ...') line and not the Node.js 0.10.x style of calling succeed() on the context. In desperation I added both, but the result is the same.
Anyone have an idea what's going on? Any way in which I can debug this?
In case the invocation logic between A and B makes a difference in the state that B starts in, here's the invocation:
As Michael - sqlbot said, the issue seems to be that as long as there is an open connection the event loop is not empty, so calling the callback doesn't terminate the function. I had the same problem with an open Redis connection; the solution, as stated, is context.callbackWaitsForEmptyEventLoop = false;
At least for Redis connections, it also helps to quit the connection so that Lambda can finish the job properly.
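A minimal handler sketch of the two fixes mentioned above (doWork and redisClient are placeholders, not part of the original code):
'use strict';

exports.handler = function (event, context, callback) {
    // Don't let an open connection's pending handles keep the function
    // alive until the timeout.
    context.callbackWaitsForEmptyEventLoop = false;

    doWork(event, function (err, result) {
        // Alternatively (or additionally), close long-lived connections
        // explicitly, e.g. redisClient.quit(), before calling back.
        callback(err, result);
    });
};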
