Magnetometer sampling with Android NDK through callbacks - android-ndk

I have been using NDK APIs to obtain samples from the magnetometer through a callback function. According to the documentation, that is, the header files sensor.h and looper.h, I simply have to create an event queue and pass the callback function as an argument.
/*
* Creates a new sensor event queue and associate it with a looper.
*/
ASensorEventQueue* ASensorManager_createEventQueue(ASensorManager* manager,
ALooper* looper, int ident, ALooper_callbackFunc callback, void* data);
After creating the queue and enabling the sensor, the callbacks occur and I am able to retrieve samples as events are triggered. So far so good. The problem is that when I call, from Java, the native function that does the proper initialization, I want to access the retrieved samples (say, N samples) from that initialization function. Again, according to the API, the callbacks keep occurring as long as 1 is returned, and returning 0 stops them.
/**
* For callback-based event loops, this is the prototype of the function
* that is called. It is given the file descriptor it is associated with,
* a bitmask of the poll events that were triggered (typically ALOOPER_EVENT_INPUT),
* and the data pointer that was originally supplied.
*
* Implementations should return 1 to continue receiving callbacks, or 0
* to have this file descriptor and callback unregistered from the looper.
*/
typedef int (*ALooper_callbackFunc)(int fd, int events, void* data);
The thing is, I can't figure out how these callbacks really work. After initializing the event queue and so forth, the callbacks occur right away, and there seems to be no way to access the retrieved samples from outside the callback function after returning 0. In fact, the callback function apparently keeps looping until 0 is returned. Is there any way to do this? How does this callback function really work?

This was a silly question on my part. Declaring a static array, global or not, to save the data did the trick. Then, after collecting N samples, just pass the array to Java through JNI from inside the callback function.

Related

How multiple simultaneous requests are handled in Node.js when response is async?

I can imagine a situation where 100 requests arrive at a single Node.js server. Each of them requires some DB interaction, which is implemented with natively async code - using a task queue or at least the microtask queue (e.g. the DB driver interface is promisified).
How does Node.js return a response once the request handler has stopped being synchronous? What happens to the connections from the API/web clients where these 100 requests originated?
This feature is available at the OS level and is called (funnily enough) asynchronous I/O or non-blocking I/O (Windows also calls/called it overlapped I/O).
At the lowest level (whether you're in C, C#, or Swift), the operating system provides an API to keep track of requests and responses. There are various APIs available depending on the OS you're on, and Node.js uses libuv to automatically select the best available API at compile time, but for the sake of understanding how an asynchronous API works, let's look at the one available on all platforms: the select() system call.
The select() function looks something like this:
int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout);
The fd_set data structure is a set/list of file descriptors that you are interested in watching for I/O activity. And remember, in POSIX sockets are also file descriptors. The way you use this API is as follows:
// Pseudocode:
// Say you just sent a request to a mysql database and also sent a http
// request to google maps. You are waiting for data to come from both.
// Instead of calling `read()` which would block the thread you add
// the sockets to the read set:
add mysql_socket to readfds
add maps_socket to readfds
// Now you have nothing else to do so you are free to wait for network
// I/O. Great, call select:
select(max_fd + 1, &readfds, NULL, NULL, NULL); // nfds is the highest fd plus one, not the number of fds
// Select is a blocking call. Yes, non-blocking I/O involves calling a
// blocking function. Yes it sounds ironic but the main difference is
// that we are not blocking waiting for each individual I/O activity,
// we are waiting for ALL of them
// At some point select returns. This is where we check which request
// matches the response:
check readfds if mysql_socket is set {
then call mysql_handler_callback()
}
check readfds if maps_socket is set {
then call maps_handler_callback()
}
go to beginning of loop
So basically the answer to your question is: we check a data structure to see which socket/file just triggered an I/O activity and execute the appropriate code.
You no doubt can easily spot how to generalize this code pattern: instead of manually setting and checking the file descriptors, you can keep all pending async requests and callbacks in a list or array and loop through it before and after the select(). This is in fact what Node.js (and JavaScript in general) does. And it is this list of callbacks/file descriptors that is sometimes called the event queue - it is not a queue per se, just a collection of things you are waiting to execute.
The select() function also has a timeout parameter at the end, which can be used to implement setTimeout() and setInterval() and, in browsers, to process GUI events so that we can run code while waiting for I/O. Because remember, select is blocking - we can only run other code if select returns. With careful management of timers we can calculate the appropriate value to pass as the timeout to select.
The fd_set data structure is not actually a linked list. In older implementations it is a bitfield. More modern implementations can improve on the bitfield as long as they comply with the API. But this partly explains why there are so many competing async APIs like poll, epoll, kqueue, etc. They were created to overcome the limitations of select. Different APIs keep track of the file descriptors differently - some use linked lists, some hash tables, some cater to scalability (being able to listen to tens of thousands of sockets) and some cater to speed, and most try to do both better than the others. Whatever they use, in the end what stores the request is just a data structure that keeps track of file descriptors.

Switch execution contexts in CLR without additional threads to get rid of callbacks

I'm writing a .NET wrapper to a C++ lib for accessing custom DB. The lib uses callbacks to return data queried.
void Query(Params p, Func callback);
It calls specified function for every record in returned dataset.
I need to write an interface like so:
Record ReadOne();
It's supposed to block until it gets one record and then return.
To my understanding, such a method should run until the library calls back, then freeze that execution and switch context to return a single record. On the next call to ReadOne() it should switch back and return from the callback.
I could see this done with an additional thread. My question is: could it be done without an additional thread? Are there any context-manipulation mechanisms in the CLR? Could async/await serve this purpose?

Is it safe to skip calling callback if no action needed in nodejs

scenario 1
function a(callback){
  console.log("not calling callback");
}
a(function(callback_res){
  console.log("callback_res", callback_res);
});
scenario 2
function a(callback){
  console.log("calling callback");
  callback(true);
}
a(function(callback_res){
  console.log("callback_res", callback_res);
});
Will function a wait for the callback and not terminate in scenario 1? However, the program terminates in both scenarios.
The problem is not safety but intention. If a function accepts a callback, it's expected that it will be called at some point. If it ignores the argument it accepts, the signature is misleading.
This is a bad practice because the function signature gives a false impression of how the function works.
It also may cause a "parameter is unused" warning in linters.
will function a be waiting for callback and will not terminate in scenario 1?
The function doesn't contain asynchronous code and won't wait for anything. The fact that callbacks are commonly used in asynchronous control flow doesn't mean that they are asynchronous per se.
will function a be waiting for callback and will not terminate in scenario 1?
No. There is nothing in the code you show that waits for a callback to be called.
Passing a callback to a function is just like passing an integer to a function: the function is free to use it or not, and it doesn't mean anything more than that to the interpreter. The JS interpreter has no special logic to "wait for a passed callback to get called". It has no effect one way or the other on when the program terminates. It's just a function argument that the called function can decide whether to use or ignore.
As another example, it used to be common to pass two callbacks to a function, one was called upon success and one was called upon error:
function someFunc(successFn, errorFn) {
  // do some operation and then call either successFn or errorFn
}
In this case, it was pretty clear that one of these was going to get called and the other was not. There's no need (from the JS interpreter's point of view) to call a passed callback. That's purely the prerogative of the logic of your code.
Now, it would not be a good practice to design a function that shows a callback in the calling signature and then never, ever call that callback. That's just plain wasteful and a misleading design. There are many cases of callbacks that are sometimes called and sometimes not depending upon circumstances. Array.prototype.forEach is one such example. If you call array.forEach(fn) on an empty array, the callback is never called. But, of course, if you call it on a non-empty array, it is called.
If your function carries out asynchronous operations and the point of the callback is to communicate when the asynchronous operation is done and whether it concluded with an error or a value, then it would generally be bad form to have code paths that would never call the callback, because it would be natural for a caller to assume the callback is going to get called eventually. I can imagine there might be some exceptions to this, but they had better be documented really well in the doc/comments for the function.
For asynchronous operations, your question reminds me somewhat of this: Do never resolved promises cause memory leak? which might be useful to read.

What's the functionality of the function pm_runtime_put_sync()?

The function pm_runtime_put_sync() is called in spi-omap2-mcspi.c
Can somebody please explain what this function call actually does?
Thank you!
It calls __pm_runtime_idle(dev, RPM_GET_PUT) internally, which is documented as:
int __pm_runtime_idle(struct device *dev, int rpmflags)
Entry point for runtime idle operations.
* @dev: Device to send idle notification for.
* @rpmflags: Flag bits.
*
* If the RPM_GET_PUT flag is set, decrement the device's usage count and
* return immediately if it is larger than zero. Then carry out an idle
* notification, either synchronous or asynchronous.
*
* This routine may be called in atomic context if the RPM_ASYNC flag is set,
* or if pm_runtime_irq_safe() has been called.
Here are the source and documentation.

Proper handling of context data in libaio callbacks?

I'm working with kernel-level async I/O (i.e. libaio.h). Prior to submitting a struct iocb using io_submit, I set the callback using io_set_callback, which sticks a function pointer in iocb->data. Finally, I get the completed events using io_getevents and run each callback.
I'd like to be able to use some context information within the callback (e.g. a submission timestamp). The only method by which I can think of doing this is to continue using io_getevents, but have iocb->data point to a struct with context and the callback.
Are there any other methods for doing something like this, and is iocb->data guaranteed to be untouched when using io_getevents? My understanding is that there is another method by which libaio will automatically run callbacks, which would be a problem if iocb->data weren't pointing to a function.
Any clarification here would be nice. The documentation on libaio really seems to be lacking.
One solution, which I would imagine is typical, is to "derive" from iocb, and then cast the pointer you get back from io_getevents() to your struct. Something like this:
struct my_iocb {
    struct iocb cb;   // must be the first member so the cast works
    void* userdata;
    // ... anything else
};
When you issue your jobs, whether you do it one at a time or in a batch, you provide an array of pointers to iocb structs, which means they may point to my_iocb as well.
When you retrieve the notifications back from io_getevents(), you simply cast the io_event::obj pointer to your own type:
struct io_event events[512];
int num_events = io_getevents(ioctx, 1, 512, events, NULL);
for (int i = 0; i < num_events; ++i) {
    struct my_iocb* job = (struct my_iocb*)events[i].obj;
    // .. do stuff with job
}
If you don't want to block in io_getevents, but instead be notified via a file descriptor (so that you can block in select() or epoll(), which might be more convenient), I would recommend using the (undocumented) eventfd integration.
You can tie an iocb to an eventfd file descriptor with io_set_eventfd(struct iocb* cb, int fd). Whenever the job completes, it increments the eventfd counter by one.
Note: if you use this mechanism, it is very important never to read more jobs from the io context (with io_getevents()) than the eventfd counter said there were, otherwise you introduce a race condition between reading the eventfd counter and reaping the jobs.
