I have a requirement for perfect gapless looped audio in a BlackBerry 10 app. My loops are stored as WAV files. The method I'm using for playing them is:
Create a buffer for the WAV file using alutCreateBufferFromFile which returns a bufferID
Create a sound source using alGenSources
Attach the buffer to the source using alSourcei(source, AL_BUFFER, bufferID)
Set the source looping property to true using alSourcei(source, AL_LOOPING, AL_TRUE)
Play the source using alSourcePlay(source)
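In code, the steps above boil down to roughly the following (a sketch; error handling is omitted, ALUT is assumed to be initialised already, and "loop.wav" is a placeholder file name):

ALuint bufferID = alutCreateBufferFromFile("loop.wav");
ALuint source;
alGenSources(1, &source);
alSourcei(source, AL_BUFFER, bufferID);   // attach the buffer to the source
alSourcei(source, AL_LOOPING, AL_TRUE);   // loop seamlessly
alSourcePlay(source);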
The audio plays fine most of the time, but during UI transitions (such as when the backlight goes off, or when the app is minimised) the audio stutters.
Any ideas how I can ensure the audio is smooth the whole time?
How do you run the thread/process that plays the WAV file? Have you had a chance to play around with priorities and policies for that thread?
I think these low-level system calls, which let you change the process (thread, actually) priority and scheduling policy, might help:
pthread_setschedprio
pthread_setschedparam
Also, have a look at respective doc pages:
BB10 Priorities
BB10 Scheduling policies
QNX Neutrino MicroKernel
I'd start by setting the policy to FIFO and raising the priority of the thread playing the audio file. Hope it helps.
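A minimal sketch of what that could look like, called from the audio thread itself (the priority value 50 is only an example; the valid range and required permissions depend on the system):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Call this from the thread that feeds/plays the audio. */
static void raise_audio_thread_priority(void)
{
    struct sched_param param;
    param.sched_priority = 50;  /* example value; query the allowed range with
                                   sched_get_priority_min/max(SCHED_FIFO) */
    int rc = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (rc != 0)
        fprintf(stderr, "pthread_setschedparam failed: %d\n", rc);
}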
I have an idea that I have been working on, but there are some technical details that I would love to understand before I proceed.
From what I understand, Linux communicates with the underlying hardware through device files under /dev/. I was messing around with my webcam input to Zoom, and I found someone explaining that I need to create a virtual device and connect it to the output of another program called v4l2loopback.
My questions are
1- How does Zoom detect the webcams available for input? My /dev directory has two "files" for video (/dev/video0 and /dev/video1), yet Zoom only detects one webcam. Is the webcam communication done through these video files or not? If yes, why doesn't simply creating one affect Zoom's input choices? If not, how does Zoom detect the input and read the webcam feed?
2- Can I create a virtual device and write a kernel module for it that feeds the input from a local file? I have written a lot of kernel modules, and I know they have read, write, and release methods. I want to parse the video whenever a read request from Zoom is issued. How should the video be encoded? Is it MP4, a raw format, or something else? And how fast should I be sending input (in terms of kilobytes)? I think it is a function of my webcam's recording specs: if it records 1920x1080 at 20 fps with 3 bytes (RGB) per pixel, that works out to 1920 × 1080 × 3 × 20 ≈ 124 MB of raw data per second. But how does Zoom expect the input to be fed to it? Assuming it consumes the stream in real time, it should be reading input every few milliseconds. How do I get access to such information?
Thank you in advance. This is a learning experiment, I am just trying to do something fun that I am motivated to do, while learning more about Linux-hardware communication. I am still a beginner, so please go easy on me.
Apparently, there are two types of /dev/video* files: one for metadata and one for the actual stream from the webcam. Creating a virtual device of the same type as the stream device in /dev did result in Zoom recognizing it as an independent webcam, even without creating its metadata file. I did finally achieve what I wanted, though in the end I used OBS Studio's virtual camera feature, which was added in update 26.0.1, and it is working perfectly so far.
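For anyone exploring the same thing: one way to tell which /dev/video* node actually carries the capture stream (as opposed to the metadata node) is to query it with the V4L2 VIDIOC_QUERYCAP ioctl. A rough sketch in C (file name and usage are illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /dev/videoN\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDWR);
    struct v4l2_capability cap;
    if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        perror(argv[1]);
        return 1;
    }
    printf("%s: %s\n", argv[1], cap.card);
    if (cap.device_caps & V4L2_CAP_VIDEO_CAPTURE)
        printf("  provides a video capture stream\n");
    else
        printf("  no capture stream here (likely the metadata node)\n");
    close(fd);
    return 0;
}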
I am working with an application that uses OpenAL API quite extensively. In particular, there are multiple sound sources, non-trivial listener filters, etc.
I want to be able to run this application significantly faster than real-time. At the same time, the sound must be saved for later postprocessing. Is there a way to access the OpenAL output programmatically (virtually) without ever playing the sound on the real playback device?
Ideally, I'd like access to the audio that would be played during each tick of the main loop of my application. Normally one tick corresponds to one rendered frame (e.g. 1/30th of a second), but in this case we would be running the app as fast as possible.
We ended up using OpenAL Soft to do this. Example:
#include "alext.h"
LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT;
alcLoopbackOpenDeviceSOFT = (LPALCLOOPBACKOPENDEVICESOFT)alcGetProcAddress(NULL, "alcLoopbackOpenDeviceSOFT");
ALCdevice *device = alcLoopbackOpenDeviceSOFT(NULL);
Use this loopback device in place of your default device, then create the context on it:
ALCcontext *context = alcCreateContext(device, attrs);
Set the attrs as you would for your default device.
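For a loopback device, the attribute list also needs to describe the render format. A minimal example (stereo, 16-bit, 44.1 kHz, using the ALC_SOFT_loopback attribute names from alext.h):

ALCint attrs[] = {
    ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT,
    ALC_FORMAT_TYPE_SOFT,     ALC_SHORT_SOFT,
    ALC_FREQUENCY,            44100,
    0  /* terminator */
};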
Then in the main loop use:
LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT;
alcRenderSamplesSOFT = (LPALCRENDERSAMPLESSOFT)alcGetProcAddress(NULL, "alcRenderSamplesSOFT");
alcRenderSamplesSOFT(device, buffer, 1024);
Here the buffer receives 1024 rendered sample frames. Since rendering happens on demand rather than against a real playback device, this runs faster than real time, so you can render samples on every tick.
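Putting the pieces together, the per-tick usage could look roughly like this (a sketch assuming the stereo 16-bit context above; app_is_running, update_simulation and write_samples_to_wav are placeholder names for your own loop and sink):

alcMakeContextCurrent(context);
ALshort buffer[1024 * 2];                       /* 1024 frames x 2 channels */
while (app_is_running) {                        /* main loop, running as fast as possible */
    update_simulation();                        /* placeholder: advance one tick */
    alcRenderSamplesSOFT(device, buffer, 1024); /* render this tick's audio */
    write_samples_to_wav(buffer, 1024);         /* placeholder: save for postprocessing */
}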
Are you able to perform your required processing on the audio data before it is shipped to OpenAL? I've done a lot with javax.sound.sampled when it is untethered from the blocking write() method in SourceDataLine, especially when saving to file rather than playing back.
From what little I know about OpenAL, there is also a blocking process that occurs when data is shipped, with a queue of arrays being managed. I've been meaning to look into this further...
(Probably not being very helpful here. Apologies.)
I'm converting an ESP32 project to a Raspberry Pi Zero. One of the project's behaviors is to play back sound effects based on specific events or triggers. I prefer to use MP3 format so I can store information about the contents of the file in the ID3 tags, to make the files themselves easier to manage (there are a lot of them!).
I can find examples of using any number of libraries to play mp3s in python, and I found an example of selecting a device using 'sounddevice' but it seems to want numpy arrays to play sound data.
I'm wondering what the easiest and quickest way is to play mp3 files (or should I go to some other file format with a data stub file for each to do my file management?).
Since these behaviors are played as responses, they need to at least start playback quickly (i.e. not wait for a format conversion to take place). And in some cases, other behaviors (such as voice recognition triggers) already add potential latency to the device's total response time.
EDIT: additional info
Quickest means processor speed (Pi Zeros slow down quickly under heavy load).
These are real-time responses, so any lag from converting defeats the purpose of the playback.
Also, the device from Seeed is configured as an ALSA (asound) device.
I have a windows phone 8 app which plays audio streams from a remote location or local files using the BackgroundAudioPlayer. I now want to be able to add audio effects, for example, reverb or echo, etc...
Please could you advise me on how to do this? I haven't been able to find a way of hooking extra audio processing code into the audio pipeline, even though I've read a lot about WASAPI and XAudio2 and looked at many code examples.
Note that the app is written in C# but, from my previous experience with writing audio processing code, I know that I should be writing the audio code in native C++. Roughly speaking, I need to find a point at which there is an audio buffer containing raw PCM data which I can use as an input for my audio processing code which will then write either back to the same buffer or to another buffer which is read by the next stage of audio processing. There need to be ways of synchronizing what happens in my code with the rest of the phone's audio processing mechanisms and, of course, the process needs to be very fast so as not to cause audio glitches. Or something like that; I'm used to how VST works, not how such things might work in the Windows Phone world.
Looking forward to seeing what you suggest...
Kind regards,
Matt Daley
"I need to find a point at which there is an audio buffer containing raw PCM data"
AFAIK there's no such point. This MSDN page hints that audio/video decoding is performed not by the OS, but by the Qualcomm chip itself.
You can use something like Mp3Sharp for decoding. This way the MP3 will be decoded on the CPU by your managed code, so you can process it however you like, then feed the PCM into the media stream source. The main downside is battery life: the hardware-provided codecs should be much more power-efficient.
I'm working on a project that requires me to sync an audio playback(preferably an mp3 file) with my program.
My program reads a motion file from a txt file and outputs it to the serial port at a particular rate. At the same time, an audio file has to be played back on the speaker. This audio file has to be in sync with the data; that is to say, after transmitting, say, 100 bytes of data, the audio must have played back to a predefined point.
What would be the tools used to play and control audio like this?
a tutorial would be great!
Thanks!!
In general, when working with audio, you want to synchronize the other sources to the audio. This is for several reasons, the most important being that audio runs on a clock on its own hardware, and you'll have to get timing information from that clock. There is a guide here, written for PortAudio, but the principles apply to other situations:
http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf
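To make that concrete, here is a rough PortAudio sketch of deriving a clock from the audio callback and pacing other output against it (the callback only plays silence; decoding your audio into the output buffer and the actual serial pacing are left as placeholders):

#include <portaudio.h>
#include <stdatomic.h>
#include <string.h>

#define SAMPLE_RATE 44100
#define CHANNELS    2

static atomic_long g_frames_played;  /* audio clock: frames handed to the device */

/* PortAudio calls this whenever it needs more audio; counting the frames it
   consumes gives a clock that is locked to the sound hardware. */
static int paCallback(const void *input, void *output, unsigned long frameCount,
                      const PaStreamCallbackTimeInfo *timeInfo,
                      PaStreamCallbackFlags statusFlags, void *userData)
{
    memset(output, 0, frameCount * CHANNELS * sizeof(short)); /* placeholder: silence */
    atomic_fetch_add(&g_frames_played, (long)frameCount);
    return paContinue;
}

static double audio_seconds_elapsed(void)
{
    return (double)atomic_load(&g_frames_played) / SAMPLE_RATE;
}

int main(void)
{
    PaStream *stream;
    Pa_Initialize();
    Pa_OpenDefaultStream(&stream, 0, CHANNELS, paInt16, SAMPLE_RATE, 256,
                         paCallback, NULL);
    Pa_StartStream(stream);
    /* In your own loop: send the next serial bytes once audio_seconds_elapsed()
       has reached the time they are scheduled for. */
    Pa_Sleep(5000);
    Pa_StopStream(stream);
    Pa_Terminate();
    return 0;
}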