I am using the ICM-20948 sensor, programming in MPLAB X with Harmony v3, and communicating over I2C. Can I read and write in a function, or does it have to be in a while loop?
For example would:
while (SERCOM1_I2C_IsBusy()) {}
SERCOM1_I2C_Write(ICM20948_ADDRESS, &FAccrangefactor, 1);
work properly inside a function? Many thanks.
I am doing some audio processing with JS, using the Web Audio API.
So I've created a custom AudioWorkletProcessor in which I am processing some audio.
Here is a small example.
class MyProcessor extends AudioWorkletProcessor {
  process (inputs, outputs, parameters) {
    const someProcessedNumber = cppApiProcessor.process(inputs, outputs, parameters);
    return true; // to keep the processor alive
  }
}
As you can see, the variable someProcessedNumber comes from a C++ API, and I don't know how to let the outer JS world know about it, as the processor returns a boolean (whether or not to keep the node alive) and I cannot touch the data in outputs. (I don't want to change the outgoing audio, just process it and produce a number.)
How can I do that? Is there a better way to do this?
You can use the port of an AudioWorkletProcessor to send data back to the main thread (or any other thread).
this.port.postMessage(someProcessedNumber);
Every AudioWorkletNode has a port as well which can be used to receive the message.
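For example, a minimal sketch of both sides (the processor name 'my-processor' and the surrounding audioContext are assumptions, not part of your snippet):

// In the processor (audio thread):
class MyProcessor extends AudioWorkletProcessor {
  process (inputs, outputs, parameters) {
    const someProcessedNumber = cppApiProcessor.process(inputs, outputs, parameters);
    this.port.postMessage(someProcessedNumber); // send the number to the main thread
    return true; // keep the processor alive
  }
}
registerProcessor('my-processor', MyProcessor);

// On the main thread:
const node = new AudioWorkletNode(audioContext, 'my-processor');
node.port.onmessage = (event) => {
  console.log('processed number:', event.data);
};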
Using the MessagePort will generate some garbage on the audio thread which makes the garbage collection run from time to time. It's also not the most performant way to transfer data.
If that's an issue you can use a SharedArrayBuffer instead which the AudioWorkletProcessor uses to write the data and the AudioWorkletNode uses to read the data.
ringbuf.js is a library which aims to make this process as easy as possible.
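For orientation, here is a rough sketch of the SharedArrayBuffer approach without the lock-free ring-buffer bookkeeping that ringbuf.js provides (a single-value handoff; audioContext and 'my-processor' are again assumptions):

// Main thread: allocate shared memory and hand it to the processor.
const sab = new SharedArrayBuffer(Float64Array.BYTES_PER_ELEMENT);
const shared = new Float64Array(sab);
const node = new AudioWorkletNode(audioContext, 'my-processor', {
  processorOptions: { sab }
});
// Read the latest value whenever convenient, e.g. once per animation frame.
requestAnimationFrame(function poll () {
  console.log('latest number:', shared[0]);
  requestAnimationFrame(poll);
});

// In the processor: write into the shared view instead of posting messages.
class MyProcessor extends AudioWorkletProcessor {
  constructor (options) {
    super();
    this.shared = new Float64Array(options.processorOptions.sab);
  }
  process (inputs, outputs, parameters) {
    this.shared[0] = cppApiProcessor.process(inputs, outputs, parameters);
    return true;
  }
}

Note that SharedArrayBuffer is only available on cross-origin-isolated pages.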
The following functions and fields are part of the same class in a Visual Studio DLL. Data is continuously being read and processed by the run function on a thread. However, getPoints is being accessed in a Qt app on a QTimer. I don't want to miss a single processed vector, but it seems updates are being skipped, leading to jumpy data. What's the safest way to get the up-to-date points across?
If possible I'd like an answer that uses the C++ standard library; I've been exploring mutexes, but they still seem to lead to jumpy data.
vector<float> points;
// std::mutex ioMutex;

// function running on a thread
void run(){
    while(running){
        //ioMutex.lock();
        vector<byte> data = ReadData();
        points = processData(data);
        //ioMutex.unlock();
    }
}

vector<float> getPoints(){
    return points;
}
I believe there is a mistake in your code. The while loop will consume all the processor time and will not allow other functions to run properly. In Qt, in such continuous loops it is usually good practice to call the following, because it gives other work a chance to access the event queue. If this DLL is written with Qt, please add the following inside the while loop:
QCoreApplication::processEvents();
The safest (and probably easiest) way to deliver your points-data to the main thread is by calling qApp->postEvent() with an object of a custom QEvent-subclass that contains your vector<float> as a member-variable.
That will cause the event(QEvent *) method of whatever Qt object you specified as the first argument to postEvent() to be called from inside the main/GUI thread, so you can override that method to read the vector<float> out of the QEvent subclass and update the GUI with that data.
I have a node.js program in which I use a stream to write information to an SFTP server. Something like this (simplified version):
var conn = new SSHClient();
process.nextTick(function () {
  conn.on('ready', function () {
    conn.sftp(function (error, sftp) {
      var writeStream = sftp.createWriteStream(filename);
      ...
      writeStream.write(line1);
      writeStream.write(line2);
      writeStream.write(line3);
      ...
    });
  }).connect(...);
});
Note that I'm not using the (optional) callback argument (described in the write() API specification) and I'm not sure if this may cause undesired behaviour (i.e. lines not written in the order line1, line2, line3). In other words, I don't know whether this alternative (more complex code, and possibly less efficient) should be used:
writeStream.write(line1, ..., function () {
  writeStream.write(line2, ..., function () {
    writeStream.write(line3);
  });
});
(or an equivalent alternative using async's series())
Empirically, in my tests I have always gotten the file written in the desired order (I mean, first line1, then line2 and finally line3). However, I don't know if this has happened just by chance or if the above is the right way to use write().
I understand that writing to a stream is in general asynchronous (as all I/O work should be), but I wonder if streams in node.js keep an internal buffer or similar that keeps the data ordered, so that each write() call doesn't return until the data has been put in this buffer.
Examples of the usage of write() in real programs are very welcome. Thanks!
Does write() (without callback) preserve order in node.js write streams?
Yes it does. It preserves order of your writes to that specific stream. All data you're writing goes through the stream buffer which serializes it.
but I wonder if streams in node.js keep an internal buffer or similar that keeps data ordered, so each write() call doesn't return until the data has been put in this buffer.
Yes, all data does go through a stream buffer. The .write() operation does not return until the data has been successfully copied into the buffer unless an error occurs.
Note that if you are writing any significant amount of data, you may have to pay attention to flow control (often called backpressure) on the stream. It can back up and may tell you that you need to wait before writing more, but it does buffer your writes in the order you send them.
If the .write() operation returns false, then the stream is telling you that you need to wait for the drain event before writing any more. You can read about this issue in the node.js docs for .write() and in this article about backpressure.
Your code also needs to listen for the error event to detect any errors upon writing the stream. Because the writes are asynchronous, they may occur at some later time and are not necessarily reflected in either the return value from .write() or in the err parameter to the .write() callback. You have to listen for the error event to make sure you see errors on the stream.
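Putting the last two points together, here is a sketch of honouring backpressure and catching errors (writeLines is an illustrative helper, not a stream API):

writeStream.on('error', function (err) {
  // Asynchronous write errors surface here, not as a return value of write().
  console.error('stream error:', err);
});

// Write an array of chunks, pausing for 'drain' whenever write() returns false.
function writeLines (stream, lines, done) {
  var i = 0;
  function writeNext () {
    while (i < lines.length) {
      if (!stream.write(lines[i++])) {   // internal buffer is full...
        stream.once('drain', writeNext); // ...resume once it empties
        return;
      }
    }
    done();
  }
  writeNext();
}

writeLines(writeStream, [line1, line2, line3], function () {
  writeStream.end();
});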
This is a conceptual query regarding system-level optimisation. My understanding from reading the NodeJS documentation is that pipes are handy for performing flow control on streams.
Background: I have a microphone stream coming in and I want to avoid an extra copy operation to conserve overall system MIPS. I understand that for audio streams this is not a great deal of MIPS being spent even if there is a memcpy under the hood, but I also have an extension planned to stream in camera frames at 30 fps and UHD resolution. Making multiple copies of UHD pixel data at 30 fps is highly inefficient, so I need some advice around this.
Example Code:
var spawn = require('child_process').spawn;
var PassThrough = require('stream').PassThrough;

var ps = null;
//var audioStream = new PassThrough;
//var infoStream = new PassThrough;

var start = function () {
  if (ps == null) {
    ps = spawn('rec', ['-b', 16, '--endian', 'little', '-c', 1, '-r', 16000, '-e', 'signed-integer', '-t', 'raw', '-']);
    //ps.stdout.pipe(audioStream);
    //ps.stderr.pipe(infoStream);
    exports.audioStream = ps.stdout;
    exports.infoStream = ps.stderr;
  }
};

var stop = function () {
  if (ps) {
    ps.kill();
    ps = null;
  }
};

//exports.audioStream = audioStream;
//exports.infoStream = infoStream;
exports.startCapture = start;
exports.stopCapture = stop;
Here are the questions:
To be able to perform flow control, does source.pipe(dest) perform a memcpy from the source memory to the destination memory under the hood, or does it pass a reference in memory to the destination?
The commented code contains a PassThrough class instantiation. I am currently assuming that PassThrough causes memcopies as well, so have I saved one memcpy operation in the entire system by commenting it out as above?
If I had to create a pipe between a process and a spawned child process (using child_process.spawn() as shown in How to transfer/stream big data from/to child processes in node.js without using the blocking stdio?), I presume that definitely results in a memcpy? Is there any way to make that a reference rather than a copy?
Does this behaviour differ from OS to OS? I presume it should be OS-agnostic, but I'm asking anyway.
Thanks in advance for your help. It will help my architecture a great deal.
Some URLs for reference: https://github.com/nodejs/node/
https://github.com/nodejs/node/blob/master/src/stream_wrap.cc
https://github.com/nodejs/node/blob/master/src/stream_base.cc
https://github.com/libuv/libuv/blob/v1.x/src/unix/stream.c
https://github.com/libuv/libuv/blob/v1.x/src/win/stream.c
I tried writing a complicated, huge explanation based on these and some other files; however, I came to the conclusion that it would be best to give you a summary of how my experience and reading tell me node works internally:
pipe simply connects streams, making it appear as if .on("data", …) is called by .write(…) without anything bloated in between.
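In other words, source.pipe(dest) behaves roughly like this hand-wired version (a schematic sketch, not node's actual implementation):

source.on('data', function (chunk) {
  if (!dest.write(chunk)) { // destination is backed up...
    source.pause();         // ...so stop the source
  }
});
dest.on('drain', function () {
  source.resume();          // destination caught up, keep flowing
});
source.on('end', function () {
  dest.end();
});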
Now we need to separate the JS world from the C++/C world.
When dealing with data in JS we use Buffers (https://github.com/nodejs/node/blob/master/src/node_buffer.cc).
They simply represent allocated memory with some candy on top for operating on it.
If you connect the stdout of a process to some .on("data", …) listener, it will copy the incoming chunk into a Buffer object for further use inside the JS world.
Inside the JS world you have methods like .pause() etc. (as you can see in node's stream API documentation) to prevent the process from eating memory in case incoming data flows faster than it is processed.
Connecting the stdout of a process and, for example, an outgoing TCP port through pipe will result in a connection similar to how nginx operates: it will connect these streams as if they were talking directly to each other, copying incoming data directly to the outgoing stream.
As soon as you pause a stream, node will use internal buffering in case it is unable to pause the incoming stream.
So for your scenario you should just do some testing.
Try to receive data through an incoming stream in node, pause the stream and see what happens.
I'm not sure whether node will use internal buffering or whether the process you try to run will just halt until it can continue to send data.
I expect the process to halt until you continue the stream.
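A test along these lines would do (a sketch reusing the 'rec' capture process from your question):

var spawn = require('child_process').spawn;
var ps = spawn('rec', ['-t', 'raw', '-']);

ps.stdout.on('data', function (chunk) {
  console.log('got', chunk.length, 'bytes');
});

// Pause for a couple of seconds and watch whether node buffers internally
// or the child process blocks on its stdout pipe.
setTimeout(function () {
  ps.stdout.pause();
  setTimeout(function () {
    ps.stdout.resume();
  }, 2000);
}, 1000);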
For transferring huge images I recommend transferring them in chunks or piping them directly to an outgoing port.
The chunked approach would allow you to send the data to multiple clients at once and would keep the memory footprint pretty low.
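The fan-out itself can stay as simple as this (sockets is an illustrative array of client connections; a slow client would still need its write() return value honoured, as discussed above):

ps.stdout.on('data', function (chunk) {
  sockets.forEach(function (socket) {
    socket.write(chunk); // every client receives the same chunk
  });
});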
PS: you should take a look at this gist that I just found: https://gist.github.com/joyrexus/10026630. It explains in depth how you can interact with streams.
I am trying to create a game server, and currently I am making it with threads. Every object (a player, a monster) has its own thread with a while(1) cycle, in which particular functions are performed.
And the server basically works like this:
main(){
    //some initialization
    while(1)
    {
        //read the client's packet
        //direct packet info to a particular object
        //object performs some functions
        //then server returns result packet back to client
        Sleep(1);
    }
}
I have heard that it is not efficient to make the server using threads like that, and that I should consider using Boost::Asio and making the functions work asynchronously.
But I don't know how the server would work then. I would be grateful if someone would explain how such servers basically work.
Every object (a player, a monster) has its own thread.
I have heard that it is not efficient to make the server using threads like that.
You are correct, this is not a scalable design. Consider a large game where you may have 10,000 objects or even a million. Such a design quickly falls apart when you require a thread per object. This is known as the C10K problem.
I should consider using Boost::Asio and making the functions work asynchronously. But I don't know how the server would work then. I would be grateful if someone would explain how such servers basically work.
You should start by following the Boost::Asio tutorials, and pay specific attention to the Asynchronous TCP daytime server. The concept of asynchronous programming compared to synchronous programming is not difficult after you understand that the flow of your program is inverted. From a high level, your game server will have an event loop that is driven by a boost::asio::io_service. Overly simplified, it will look like this
int main()
{
    boost::asio::io_service io_service;
    // add some work to the io_service
    io_service.run(); // start event loop
    // should never get here
}
The callback handlers that are invoked from the event loop will chain operations together. That is, once your callback for reading data from a client is invoked, the handler will initiate another asynchronous operation.
The beauty of this design is that it decouples threading from concurrency. Consider a long-running operation in your game server, such as reading data from a client. Using asynchronous methods, your game server does not need to wait for the operation to complete; the kernel will notify it when the operation has finished.