Getting audio data every 20 milliseconds in ALSA?

I would like to call snd_pcm_readi() and get audio data every 20 ms or every 40 ms. I want to know how to get my data synchronously, that is, every X ms.
Thanks for any responses.

For realtime audio capture or playback, you'll typically create a dedicated high-priority worker thread to call snd_pcm_readi() from, and then use a few ring buffers to pass data between it and the rest of your program. That thread should avoid locking.
ALSA examples:
http://www.alsa-project.org/alsa-doc/alsa-lib/_2test_2latency_8c-example.html#a36
http://www.alsa-project.org/alsa-doc/alsa-lib/_2test_2pcm_8c-example.html
If you're simply reading from disk, you'll want an ample buffer; then just wake up periodically and check whether you need to read more before the next wakeup (taking total latency into account).
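For the 20 ms case specifically, here is a minimal capture sketch along those lines (not taken from the linked examples; the device name, rate, channel count and sample format are assumptions you'd adjust). It asks ALSA for a ~20 ms period and then lets the blocking snd_pcm_readi() pace the loop in your worker thread:

// Minimal ALSA capture sketch: one blocking snd_pcm_readi() per ~20 ms period.
// Device name, rate, channels and format below are assumptions; adjust to taste.
#include <alsa/asoundlib.h>
#include <cstdint>
#include <vector>

int main() {
    snd_pcm_t *pcm = nullptr;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0) return 1;

    snd_pcm_hw_params_t *hw;
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);

    unsigned int rate = 48000;
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, nullptr);

    unsigned int period_us = 20000;          // ask for a 20 ms period
    int dir = 0;
    snd_pcm_hw_params_set_period_time_near(pcm, hw, &period_us, &dir);
    snd_pcm_hw_params(pcm, hw);

    snd_pcm_uframes_t period_frames = 0;
    snd_pcm_hw_params_get_period_size(hw, &period_frames, &dir);

    std::vector<int16_t> buf(period_frames * 2);   // 2 channels, interleaved
    for (;;) {
        // Blocks until one full period (~20 ms of audio) has been captured.
        snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf.data(), period_frames);
        if (n == -EPIPE) { snd_pcm_prepare(pcm); continue; }   // overrun: recover
        if (n < 0) break;
        // ...hand the n frames to your ring buffer / processing here...
    }
    snd_pcm_close(pcm);
    return 0;
}

The actual period granted may differ slightly from 20 ms, which is why the sketch reads back the period size instead of hard-coding it.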

Related

Node's epoll behaviour on socket

I wrote a simple node.js program that sends out 1000 HTTP requests and records when these requests come back, just incrementing a counter by 1 on each response. The endpoint is very lightweight and returns a simple HTTP response without any heavy HTML. I measured that I get back around 200-300 responses per second for 3 seconds. On the other hand, when I start 3 more instances of the same process (4 processes total = the number of my available cores), I notice that it performs 4x faster, so I receive approximately 300 * 4 responses per second. I want to understand what happens when epoll gets triggered upon the kernel notifying the poller that a new file descriptor is ready (a new TCP payload has arrived). Does V8 take this file descriptor and read/manipulate the data, and where is the actual bottleneck? Is it in computing and unpacking the payload? It seems that only 1 core works on sending/receiving these requests for this 1 process, and when I start multiple processes (matching my core count), it performs faster.
where is the actual bottleneck?
Sounds like you're interested in profiling your application. See https://nodejs.org/en/docs/guides/simple-profiling/ for the official documentation on that.
What I can say up front is that V8 does not deal with file descriptors or epoll, that's all Node territory. I don't know Node's internals myself (only V8), but I do know that the code is on https://github.com/nodejs/node, so you can look up how any given feature is implemented.
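For background on the epoll side of the question, here is a rough, hypothetical sketch of what an epoll-driven loop looks like at the syscall level (libuv does something similar under the hood, with far more bookkeeping); socket setup is omitted and the names are placeholders:

// Rough sketch of the syscall-level pattern behind an event loop (cf. libuv):
// epoll_wait() wakes up when the kernel marks a socket readable, then the
// loop read()s the bytes and hands them to user code on that same thread.
#include <sys/epoll.h>
#include <unistd.h>
#include <cstdio>

void run_loop(int sock_fd) {   // sock_fd: an already-connected, non-blocking socket
    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = sock_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, sock_fd, &ev);

    epoll_event ready[64];
    char buf[64 * 1024];
    for (;;) {
        int n = epoll_wait(ep, ready, 64, -1);   // sleep until the kernel says "readable"
        for (int i = 0; i < n; ++i) {
            ssize_t got = read(ready[i].data.fd, buf, sizeof buf);
            if (got <= 0) { close(ready[i].data.fd); continue; }
            // In Node, this is roughly where the HTTP parser and your JS callback run,
            // still on this single thread, which is why one process saturates one core.
            std::printf("got %zd bytes\n", got);
        }
    }
}

The key point is that the readiness notification, the read, the parsing and the callback all happen on the same thread, so a single process is limited to one core's worth of that work.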

Is there a way to make time pass faster in Linux

I'm not quite sure how timekeeping works in Linux, short of configuring an NTP server and such.
I am wondering if there is a way to make time tick faster in Linux. I would like, for example, for 1 second to pass 10000 times faster than normal.
For clarification, I don't want to make time jump like resetting a clock; I would like to increase the tick rate, whatever it may be.
This is often needed functionality for simulations and replaying incoming data or events as fast as possible.
The way people solve this is with an event loop, e.g. libevent or boost::asio. The current time is obtained from the event loop (e.g. the time when epoll returned) and stored in an event-loop "current time" variable. Instead of using gettimeofday or clock_gettime, the time is read from that variable, and all timers are driven by the event loop's current time.
When simulating/replaying, the event loop's current time is assigned the timestamp of the next event, eliminating the delays between events and replaying them as fast as possible. Your timers still work and fire in between the events just as they would in real time, but without the waiting. For this to work, the saved event stream that you replay must of course contain a timestamp for each event.
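A stripped-down sketch of that pattern, with illustrative names that don't come from any particular library: the loop's "current time" is just a variable, jumped to each event's recorded timestamp in replay mode.

// Simulated-clock event loop sketch: "now" is a variable, not the wall clock.
// In replay mode it jumps straight to each event's recorded timestamp, so
// timers fire in the right order but without any real waiting.
#include <cstdint>
#include <functional>
#include <map>
#include <utility>
#include <vector>

struct SimLoop {
    uint64_t now_us = 0;                                    // the loop's current time
    std::multimap<uint64_t, std::function<void()>> timers;  // deadline -> callback

    void add_timer(uint64_t delay_us, std::function<void()> cb) {
        timers.emplace(now_us + delay_us, std::move(cb));
    }

    // Replay recorded events as fast as possible, firing due timers in between.
    void replay(const std::vector<std::pair<uint64_t, std::function<void()>>>& events) {
        for (const auto& [ts, handler] : events) {
            while (!timers.empty() && timers.begin()->first <= ts) {
                auto it = timers.begin();
                now_us = it->first;          // advance virtual time to the timer deadline
                auto cb = std::move(it->second);
                timers.erase(it);
                cb();
            }
            now_us = ts;                     // jump virtual time to the event's timestamp
            handler();
        }
    }
};

In live mode the same now_us would instead be refreshed from clock_gettime() (or the time epoll returned) on every loop iteration, and the rest of the code would not change.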

UDP: doubts about write() and socket timeout (SO_SNDTIMEO) when the socket buffer is full

I'm having some problems understanding how socket buffers and timeouts are managed under Linux, when using UDP. I'm using the OpenWrt embedded Linux distribution, with kernel version 4.14.63.
In order to better understand these concepts, I'm trying to analyze the code that is used by a client of the iPerf open source network measurement program, when sending UDP packets to test parameters such as reachable throughput. It is written in C and C++.
In particular, I tried setting an offered traffic value much higher than what the network (and consequently the receiver) can deliver, obtaining, as expected, a certain packet loss.
In this case, thanks to iPerf computing the loop time after the transmission of each packet, using timestamps, I was able to estimate how much time the application took to write each packet to the UDP buffer.
The packets are actually written inside a while() loop, which calls write() on a socket for each of them.
A timeout is also set, once, on the socket by calling:
setsockopt(mSettings->mSock, SOL_SOCKET, SO_SNDTIMEO, (char *)&timeout, sizeof(timeout))
This should set a send timeout when writing to the socket, which is, of course, a blocking one.
When the buffer is full, the write() call is blocking and I can see the loop time increasing a lot; the problem is that I can't really understand for how much time this call blocks the application.
In general, when a write() blocks, does it unblock as soon as there is room for a new packet? Or does it wait longer, as seems to happen here? As far as I could tell, when setting a "big" UDP buffer value (800 KB, while sending UDP datagrams with a 1470 B payload), it waits for around 700 ms while the networking stack, which is continuously sending data, empties far more of the buffer than a single packet would require. Why?
The other doubt I have is related to the timeout: I made small modifications to the source code in order to log the return value of each write() call, and I observed that no errors are ever encountered, even when setting a timeout of 300 ms or 600 ms, which is less than the 700 ms value observed before.
By logging also the evolution of the buffer (together with the loop time at each packet transmission), thanks to ioctl:
ioctl(mSettings->mSock,TIOCOUTQ,&bufsize);
I was able to observe, however, that setting the timeout to 300 ms or 600 ms actually made a difference: when the full buffer was detected, the blocking write() waited for around 300 ms in the first case and around 600 ms in the second.
So, even though no errors are reported, the timeout does seem to expire; yet this appears to result in a correct write operation in every case.
Is this possible? Could it be that the write() blocked the application long enough that the data was completely written by the time the timeout expired?
I'm a bit confused about this.
Thank you very much in advance.
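Not a full answer, but for reference, here is the pattern from the question boiled down to a hedged sketch (addresses, payload size and the timeout value are placeholders, not from iPerf itself). Per socket(7), when SO_SNDTIMEO expires before the datagram could be handed to the stack, write() should return -1 with errno set to EAGAIN or EWOULDBLOCK, so logging errno alongside the return value is what would reveal an expired timeout:

// Sketch of the pattern under discussion: blocking UDP socket with a send
// timeout, logging both the return value and errno of each write(), plus the
// TIOCOUTQ queue level. Addresses, payload size and timeout are placeholders.
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);
    connect(sock, reinterpret_cast<sockaddr*>(&dst), sizeof dst);  // so plain write() works

    timeval timeout{};
    timeout.tv_usec = 300 * 1000;                                  // 300 ms send timeout
    setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, &timeout, sizeof timeout);

    char payload[1470];
    std::memset(payload, 0, sizeof payload);

    for (int i = 0; i < 100000; ++i) {
        ssize_t n = write(sock, payload, sizeof payload);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                std::printf("packet %d: send timed out\n", i);     // SO_SNDTIMEO expired
            else
                std::perror("write");
        }
        int queued = 0;
        ioctl(sock, TIOCOUTQ, &queued);    // bytes still sitting in the socket send buffer
        std::printf("packet %d: ret=%zd outq=%d\n", i, n, queued);
    }
    close(sock);
    return 0;
}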

jMeter adding threads/users (read from CSV Data) to a running thread group

My problem is quite complex.
The goal is to test how our web site responds to an increasing number of requests from different users.
So I can take users/passwords from a CSV data file and launch an HTTP request (with variables read from the file).
But I don't want to run the thread group with all users at the same time; I want to loop and, at every iteration, add another user from the file to the running thread group (after some delay).
It seems very difficult to do this with JMeter. Perhaps I'd need to call a custom Java class?
If I understand you correctly, you should just use Ramp-Up. This parameter controls how fast your test reaches the maximum thread count.
As explained in JMeter documentation,
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
Also, this Throughput Shaping Timer may be helpful for you: it lets you schedule the request rate over the duration of the test.
As Jay stated, you can use ramp-up to try to control this, though I am not sure the result will be what you are after; it will, however, add the startup delay. If you have a single thread, then each row of the CSV will be processed one at a time, in order.
You can set the thread group to 1 thread and loop forever. In the CSV config you can set a single pass and to terminate the thread on EOF.
CSV Data Set Config-->Recycle on EOF = False
CSV Data Set Config-->Stop thread on EOF = True
Thread Group-->Loop Count = Forever
Also keep in mind that by using BSF and Beanshell you can exert a great deal of control over JMeter.
You should check out UltimateThreadGroup from jmeter-plugins.

Realtime MIDI input and synchronisation with audio

I have built a standalone app version of a project that until now was just a VST/audiounit. I am providing audio support via rtaudio.
I would like to add MIDI support using rtmidi but it's not clear to me how to synchronise the audio and MIDI parts.
In VST/audiounit land, I am used to MIDI events that have a timestamp indicating their offset in samples from the start of the audio block.
rtmidi provides a delta time in seconds since the previous event, but I am not sure how I should grab those events and how I can work out their time in relation to the current sample in the audio thread.
How do plugin hosts do this?
I can understand how events can be sample accurate on playback, but it's not clear how they could be sample accurate when using realtime input.
rtaudio gives me a callback function. I will run at a low block size (32 samples). I guess I will pass a pointer to an rtmidi instance as the userData argument of the callback and then call midiin->getMessage( &message ); inside the audio callback, but I am not sure whether this is sensible from a threading point of view.
Many thanks for any tips you can give me
In your case, you don't need to worry about it. Your program should send the MIDI events to the plugin with a timestamp of zero as soon as they arrive. I think you have perhaps misunderstood the idea behind what it means to be "sample accurate".
As @Brad noted in his comment to your question, MIDI is indeed very slow. But that's only part of the problem... when you are working in a block-based environment, incoming MIDI events cannot be processed by the plugin until the start of a block. When computers were slower and block sizes of 512 (or, god forbid, >1024) were common, this introduced a non-trivial amount of latency which resulted in the arrangement not sounding as "tight". Therefore sequencers came up with a clever way to get around this problem. Since the MIDI events are already known ahead of time, these events can be sent to the instrument one block early with an offset in sample frames. The plugin then receives these events at the start of the block, and knows not to start actually processing them until N samples have passed. This is what "sample accurate" means in sequencers.
However, if you are dealing with live input from a keyboard or some sort of other MIDI device, there is no way to "schedule" these events. In fact, by the time you receive them, the clock is already ticking! Therefore these events should just be sent to the plugin at the start of the very next block with an offset of 0. Sequencers such as Ableton Live, which allow a plugin to simultaneously receive both pre-sequenced and live events, simply send any live events with an offset of 0 frames.
Since you are using a very small block size, the worst-case scenario is a latency of about 0.7 ms, which isn't bad at all. In the case of rtmidi, the timestamp does not represent an offset which you need to schedule around, but rather the time at which the event was captured. But since you only intend to receive live events (you aren't writing a sequencer, are you?), you can simply pass any incoming MIDI to the plugin right away.
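To make the "offset 0" advice concrete, here is a hedged sketch of polling rtmidi from inside the rtaudio callback, roughly as you proposed; MySynth and its methods are placeholders for your own engine, and stream/device setup is omitted:

// Sketch: drain queued MIDI at the top of each audio block and hand every
// live event to the synth with a sample offset of 0, as described above.
#include "RtAudio.h"
#include "RtMidi.h"
#include <vector>

// Placeholder for your actual synth/plugin wrapper.
struct MySynth {
    void handleMidi(const std::vector<unsigned char>& /*msg*/, int /*sampleOffset*/) {}
    void process(float* out, unsigned int nFrames) {
        for (unsigned int i = 0; i < nFrames; ++i) out[i] = 0.0f;   // silence for the sketch
    }
};

struct EngineState {
    RtMidiIn* midiIn;   // opened elsewhere; no user callback installed, so RtMidi queues messages
    MySynth*  synth;
};

int audioCallback(void* outputBuffer, void* /*inputBuffer*/, unsigned int nFrames,
                  double /*streamTime*/, RtAudioStreamStatus /*status*/, void* userData) {
    auto* state = static_cast<EngineState*>(userData);

    std::vector<unsigned char> message;
    while (true) {
        state->midiIn->getMessage(&message);   // pops from RtMidi's internal queue; non-blocking
        if (message.empty()) break;
        state->synth->handleMidi(message, /*sampleOffset=*/0);   // live input: offset 0
    }

    state->synth->process(static_cast<float*>(outputBuffer), nFrames);
    return 0;
}

If you want to be strict about avoiding allocations in the audio thread, you could instead copy events into your own preallocated ring buffer from an RtMidi callback, but at a 32-sample block size the polling approach above is a common pragmatic choice.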
