Playing a beep sound using AVAudioEngine in SwiftUI

I am new to Swift and have tried to play a beep sound at a fixed frequency, say 600 Hz. That's it. Any suggestion is welcome.
Here is an example using AVFoundation that I tried, but it plays no sound. I suspect several places but have no clues:
(1) initializing the player incorrectly
(2) preparing the buffer wrongly
(3) misusing optional types and unwrapping them incorrectly
(4) the code is okay but it does not play on the simulator
(5) and, surprisingly, many other possibilities
Can you kindly help me?
import SwiftUI
import AVFoundation

struct BeepPlayer: View {
    var body: some View {
        Button(action: playBeep) {
            Text("Play 600Hz Beep")
        }
    }

    func playBeep() {
        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        var mixer = engine.mainMixerNode
        engine.attach(player)
        var buffer = createBeepBuffer()
        player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
        engine.connect(player, to: mixer, format: mixer.outputFormat(forBus: 0))
        do {
            try engine.start()
            player.play()
        } catch {
            print("Error: \(error)")
        }
        player.stop()
    }

    func createBeepBuffer() -> AVAudioPCMBuffer {
        let sampleRate: Double = 44100
        let frequency: Float = 600
        let duration: Double = 1
        let frames = AVAudioFrameCount(44100 * duration)
        let buffer = AVAudioPCMBuffer(pcmFormat: AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 2)!, frameCapacity: frames)
        buffer?.frameLength = frames
        let amplitude: Float = 0.5
        let increment = 2 * Float.pi * frequency / Float(sampleRate)
        var currentPhase: Float = 0
        for i in 0..<Int(frames) {
            buffer?.floatChannelData?[0][i] = sin(currentPhase) * amplitude
            buffer?.floatChannelData?[1][i] = sin(currentPhase) * amplitude
            currentPhase += increment
        }
        return buffer!
    }
}
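For reference, here is a minimal, untested sketch of one way this kind of beep is commonly made to work. The two most likely culprits in the code above are that the engine and player are locals (so they can be deallocated as soon as playBeep() returns) and that player.stop() is called immediately after play(). The BeepEngine helper class and the property names below are illustrative, not a verified fix:

import SwiftUI
import AVFoundation

// Illustrative helper: the engine and player live as long as this object does.
final class BeepEngine {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    init() {
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode,
                       format: engine.mainMixerNode.outputFormat(forBus: 0))
    }

    func playBeep(frequency: Float = 600, duration: Double = 1) {
        let format = engine.mainMixerNode.outputFormat(forBus: 0)
        let sampleRate = format.sampleRate
        let frames = AVAudioFrameCount(sampleRate * duration)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames) else { return }
        buffer.frameLength = frames

        // Fill every channel with a sine wave at the requested frequency.
        let increment = 2 * Float.pi * frequency / Float(sampleRate)
        var phase: Float = 0
        for frame in 0..<Int(frames) {
            let value = sin(phase) * 0.5
            for channel in 0..<Int(format.channelCount) {
                buffer.floatChannelData?[channel][frame] = value
            }
            phase += increment
        }

        do {
            try engine.start()
        } catch {
            print("Engine start error: \(error)")
            return
        }
        player.scheduleBuffer(buffer, at: nil, options: [], completionHandler: nil)
        player.play()
        // Do not call player.stop() here; let the buffer finish playing.
    }
}

struct BeepPlayerView: View {
    // Stored so the engine is not torn down when playBeep() returns.
    // In a real app you would likely hold this outside the view (e.g. in an ObservableObject)
    // so it is not recreated when the view struct is rebuilt.
    private let beeper = BeepEngine()

    var body: some View {
        Button("Play 600Hz Beep") {
            beeper.playBeep()
        }
    }
}

The key design point is simply that the AVAudioEngine must stay alive for as long as the audio should be heard.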

Related

Trying to output sound with rust

I'm trying to output a simple sound (a square waveform). All seems to work, except no sound is coming out of my speakers. I found cpal as a library to manage sound, but maybe there is a better way to handle streams. Here is my code:
use cpal::{Sample};
use cpal::traits::{DeviceTrait, HostTrait, StreamTrait};

fn main() {
    let err_fn = |err| eprintln!("an error occurred on the output audio stream: {}", err);
    let host = cpal::default_host();
    let device = host.default_output_device().expect("no output device available");
    let supported_config = device.default_output_config().unwrap();
    println!("Device: {}, Using config: {:?}", device.name().expect("flute"), supported_config);
    let config = supported_config.into();
    let stream = device.build_output_stream(&config, write_silence, err_fn).unwrap();
    stream.play().unwrap();
    std::thread::sleep(std::time::Duration::from_millis(3000));
}

fn write_silence(data: &mut [f32], _: &cpal::OutputCallbackInfo) {
    let mut counter = 0;
    for sample in data.iter_mut() {
        let s = if (counter / 20) % 2 == 0 { &1.0 } else { &0.0 };
        counter = counter + 1;
        *sample = Sample::from(s);
    }
    println!("{:?}", data);
}
The println!("{:?}", data) call at the end of write_silence does output the samples of a square waveform.
I'm running this in the Linux/Crostini environment on a Chromebook, which may be part of the problem.
Does anyone know why it's not working? And is cpal a good starting point for audio processing?

How do I call CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer?

I'm trying to figure out how to call this AVFoundation function in Swift. I've spent a ton of time fiddling with declarations and syntax, and got this far. The compiler is mostly happy, but I'm left with one last quandary.
public func captureOutput(
    captureOutput: AVCaptureOutput!,
    didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
    fromConnection connection: AVCaptureConnection!
) {
    let samplesInBuffer = CMSampleBufferGetNumSamples(sampleBuffer)

    var audioBufferList: AudioBufferList
    var buffer: Unmanaged<CMBlockBuffer>? = nil

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        nil,
        &audioBufferList,
        UInt(sizeof(audioBufferList.dynamicType)),
        nil,
        nil,
        UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
        &buffer
    )

    // do stuff
}
The compiler complains about the 3rd and 4th arguments:
Address of variable 'audioBufferList' taken before it is initialized
and
Variable 'audioBufferList' used before being initialized
So what am I supposed to do here?
I'm working off of this StackOverflow answer but it's Objective-C. I'm trying to translate it into Swift, but run into this problem.
Or is there possibly a better approach? I need to read the data from the buffer, one sample at a time, so I'm basically trying to get an array of the samples that I can iterate over.
Disclaimer: I have just tried to translate the code from Reading audio samples via AVAssetReader to Swift, and verified that it compiles. I have not tested if it really works.
// Needs to be initialized somehow, even if we take only the address
var audioBufferList = AudioBufferList(mNumberBuffers: 1,
    mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
var buffer: Unmanaged<CMBlockBuffer>? = nil

CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer,
    nil,
    &audioBufferList,
    UInt(sizeof(audioBufferList.dynamicType)),
    nil,
    nil,
    UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
    &buffer
)

// Ensure that the buffer is released automatically.
let buf = buffer!.takeRetainedValue()

// Create UnsafeBufferPointer from the variable length array starting at audioBufferList.mBuffers
let audioBuffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers,
    count: Int(audioBufferList.mNumberBuffers))

for audioBuffer in audioBuffers {
    // Create UnsafeBufferPointer<Int16> from the buffer data pointer
    var samples = UnsafeMutableBufferPointer<Int16>(start: UnsafeMutablePointer(audioBuffer.mData),
        count: Int(audioBuffer.mDataByteSize) / sizeof(Int16))
    for sample in samples {
        // ....
    }
}
Swift 3 solution:
func loopAmplitudes(audioFileUrl: URL) {
    let asset = AVAsset(url: audioFileUrl)
    let reader = try! AVAssetReader(asset: asset)
    let track = asset.tracks(withMediaType: AVMediaTypeAudio)[0]
    let settings = [
        AVFormatIDKey : kAudioFormatLinearPCM
    ]
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    reader.add(readerOutput)
    reader.startReading()

    while let buffer = readerOutput.copyNextSampleBuffer() {
        var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
        var blockBuffer: CMBlockBuffer?

        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            buffer,
            nil,
            &audioBufferList,
            MemoryLayout<AudioBufferList>.size,
            nil,
            nil,
            kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
            &blockBuffer
        )

        let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))

        for buffer in buffers {
            let samplesCount = Int(buffer.mDataByteSize) / MemoryLayout<Int16>.size
            // Use this buffer's data pointer (not always the first buffer in the list).
            let samplesPointer = buffer.mData!.bindMemory(to: Int16.self, capacity: samplesCount)
            let samples = UnsafeMutableBufferPointer<Int16>(start: samplesPointer, count: samplesCount)
            for sample in samples {
                // do something with your sample (an Int16 amplitude value)
            }
        }
    }
}
The answers posted here make assumptions about the size of the necessary AudioBufferList -- which may have allowed them to work in their particular circumstances, but didn't work for me when receiving audio from an AVCaptureSession. (Apple's own sample code didn't work either.)
The documentation on CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer is not obvious, but it turns out you can first ask the function how big the AudioBufferList needs to be, and then call it a second time with an AudioBufferList allocated to the size it wants.
Below is a C++ example (sorry, I don't know Swift) that shows a more general solution that worked for me.
// ask the function how big the audio buffer list should be for this
// sample buffer ref
size_t requiredABLSize = 0;
err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                              &requiredABLSize,
                                                              NULL,
                                                              NULL,
                                                              kCFAllocatorSystemDefault,
                                                              kCFAllocatorSystemDefault,
                                                              kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                              NULL);

// allocate an audio buffer list of the required size
AudioBufferList* audioBufferList = (AudioBufferList*) malloc(requiredABLSize);
// ensure that blockBuffer is NULL in case the function fails
CMBlockBufferRef blockBuffer = NULL;

// now let the function fill in the ABL for you
err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                              NULL,
                                                              audioBufferList,
                                                              requiredABLSize,
                                                              kCFAllocatorSystemDefault,
                                                              kCFAllocatorSystemDefault,
                                                              kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                              &blockBuffer);

// if we succeeded...
if (err == noErr) {
    // la la la... read your samples...
}

// release the allocated block buffer
if (blockBuffer != NULL) {
    CFRelease(blockBuffer);
    blockBuffer = NULL;
}

// release the allocated ABL
if (audioBufferList != NULL) {
    free(audioBufferList);
    audioBufferList = NULL;
}
I'll leave it up to the Swift experts to offer an implementation in that language.
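Since the question is in Swift, here is a rough Swift translation of the same two-call pattern, offered purely as an untested sketch: sampleBuffer is the CMSampleBuffer from the question, and memory handling is simplified.

// Sketch: first ask for the required AudioBufferList size, then fill a list of that size.
var requiredSize = 0
var status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer,
    bufferListSizeNeededOut: &requiredSize,
    bufferListOut: nil,
    bufferListSize: 0,
    blockBufferAllocator: kCFAllocatorSystemDefault,
    blockBufferMemoryAllocator: kCFAllocatorSystemDefault,
    flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    blockBufferOut: nil)

if status == noErr {
    // Allocate a list of exactly that size and treat it as an AudioBufferList.
    let rawList = UnsafeMutableRawPointer.allocate(
        byteCount: requiredSize,
        alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { rawList.deallocate() }
    let listPointer = rawList.bindMemory(to: AudioBufferList.self, capacity: 1)

    // Second call: let the function fill in the list and hand back the retained block buffer.
    var blockBuffer: CMBlockBuffer?
    status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        bufferListSizeNeededOut: nil,
        bufferListOut: listPointer,
        bufferListSize: requiredSize,
        blockBufferAllocator: kCFAllocatorSystemDefault,
        blockBufferMemoryAllocator: kCFAllocatorSystemDefault,
        flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
        blockBufferOut: &blockBuffer)

    if status == noErr, blockBuffer != nil {
        // Iterate the buffers; the data stays valid while blockBuffer is alive
        // (ARC manages the CMBlockBuffer, so no CFRelease is needed in Swift).
        for audioBuffer in UnsafeMutableAudioBufferListPointer(listPointer) {
            // la la la... read your samples...
            _ = audioBuffer
        }
    }
}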
Martin's answer works and does exactly what I asked in the question, however, after posting the question and spending more time with the problem (and before seeing Martin's answer), I came up with this:
public func captureOutput(
    captureOutput: AVCaptureOutput!,
    didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
    fromConnection connection: AVCaptureConnection!
) {
    let samplesInBuffer = CMSampleBufferGetNumSamples(sampleBuffer)
    self.currentZ = Double(samplesInBuffer)

    let buffer: CMBlockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer)

    var lengthAtOffset: size_t = 0
    var totalLength: size_t = 0
    var data: UnsafeMutablePointer<Int8> = nil

    if( CMBlockBufferGetDataPointer( buffer, 0, &lengthAtOffset, &totalLength, &data ) != noErr ) {
        println("some sort of error happened")
    } else {
        for i in stride(from: 0, to: totalLength, by: 2) {
            // do stuff
        }
    }
}
This is a slightly different approach, and probably still has room for improvement, but the main point here is that, at least on an iPad Mini (and probably other devices), each time this method is called we get 1,024 samples. But those samples come in an array of 2,048 Int8 values. Every other one is the left/right byte that needs to be combined to make an Int16, turning the 2,048 half-samples into 1,024 whole samples.
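Purely for illustration, the byte pairing described above might look roughly like this in current Swift, assuming little-endian 16-bit samples; data and totalLength are the variables from the snippet above:

// Sketch only: combine each pair of Int8 bytes into one Int16 sample (little-endian assumed).
for i in stride(from: 0, to: totalLength, by: 2) {
    let low  = UInt16(UInt8(bitPattern: data[i]))
    let high = UInt16(UInt8(bitPattern: data[i + 1]))
    let sample = Int16(bitPattern: (high << 8) | low)
    // ... use `sample` (e.g. accumulate a peak or RMS value)
}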
It works for me. Try it:
let musicUrl: NSURL = mediaItemCollection.items[0].valueForProperty(MPMediaItemPropertyAssetURL) as! NSURL
let asset: AVURLAsset = AVURLAsset(URL: musicUrl, options: nil)
let assetOutput = AVAssetReaderTrackOutput(track: asset.tracks[0] as! AVAssetTrack, outputSettings: nil)

var error: NSError?
let assetReader: AVAssetReader = AVAssetReader(asset: asset, error: &error)

if error != nil {
    print("Error asset Reader: \(error?.localizedDescription)")
}

assetReader.addOutput(assetOutput)
assetReader.startReading()

let sampleBuffer: CMSampleBufferRef = assetOutput.copyNextSampleBuffer()

var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
var blockBuffer: Unmanaged<CMBlockBuffer>? = nil

CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer,
    nil,
    &audioBufferList,
    sizeof(audioBufferList.dynamicType), // instead of UInt(sizeof(audioBufferList.dynamicType))
    nil,
    nil,
    UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
    &blockBuffer
)
I do this (Swift 4.2):
let n = CMSampleBufferGetNumSamples(audioBuffer)
let format = CMSampleBufferGetFormatDescription(audioBuffer)!
let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(format)!.pointee
let nChannels = Int(asbd.mChannelsPerFrame) // probably 2
let bufferlistSize = AudioBufferList.sizeInBytes(maximumBuffers: nChannels)
let abl = AudioBufferList.allocate(maximumBuffers: nChannels)
for i in 0..<nChannels {
    abl[i] = AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil)
}
var block: CMBlockBuffer?
var status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(audioBuffer, bufferListSizeNeededOut: nil, bufferListOut: abl.unsafeMutablePointer, bufferListSize: bufferlistSize, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: 0, blockBufferOut: &block)
assert(noErr == status)
// use AudioBufferList here (abl.unsafePointer), e.g. with ExtAudioFileWrite or what have you
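As a follow-up sketch (not part of the original answer), reading the samples out of that list could look roughly like this, assuming 16-bit interleaved data as in the earlier answers; abl and block are the variables from the snippet above:

// Sketch: iterate the filled-in AudioBufferList and view each buffer as Int16 samples.
for audioBuffer in abl {
    guard let mData = audioBuffer.mData else { continue }
    let sampleCount = Int(audioBuffer.mDataByteSize) / MemoryLayout<Int16>.size
    let samples = UnsafeBufferPointer(start: mData.assumingMemoryBound(to: Int16.self), count: sampleCount)
    for sample in samples {
        // ... use `sample`, keeping `block` alive while reading
        _ = sample
    }
}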

Android OpenSL ES - issue with .wav file sampled at 44.1Khz

I'm trying to convert some of my OpenAL code to OpenSL ES for Android (KitKat 4.4.4) on Genymotion and have encountered an issue with .wav files sampled at 44.1kHz. My application is a native one (glue).
I've followed the /native-audio sample from the Android NDK samples and fragments from the excellent book Android NDK Beginner's Guide, so my code behaves correctly on most wav/PCM data, except files sampled at 44.1kHz. My specific code is this:
Engine init
// create OpenSL ES engine
SLEngineOption EngineOption[] = {(SLuint32) SL_ENGINEOPTION_THREADSAFE, (SLuint32) SL_BOOLEAN_TRUE};
const SLInterfaceID lEngineMixIIDs[] = {SL_IID_ENGINE};
const SLboolean lEngineMixReqs[] = {SL_BOOLEAN_TRUE};
SLresult res = slCreateEngine(&mEngineObj, 1, EngineOption, 1, lEngineMixIIDs, lEngineMixReqs);
res = (*mEngineObj)->Realize(mEngineObj, SL_BOOLEAN_FALSE);
res = (*mEngineObj)->GetInterface(mEngineObj, SL_IID_ENGINE, &mEngine); // get 'engine' interface
// create output mix (AKA playback; this represents speakers, headset etc.)
res = (*mEngine)->CreateOutputMix(mEngine, &mOutputMixObj, 0,NULL, NULL);
res = (*mOutputMixObj)->Realize(mOutputMixObj, SL_BOOLEAN_FALSE);
Player init
SLresult lRes;
// Set-up sound audio source.
SLDataLocator_AndroidSimpleBufferQueue lDataLocatorIn;
lDataLocatorIn.locatorType = SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE;
lDataLocatorIn.numBuffers = 1; // 1 buffer for a one-time load
// analyze and set correct PCM format
SLDataFormat_PCM lDataFormat;
lDataFormat.formatType = SL_DATAFORMAT_PCM;
lDataFormat.numChannels = audio->wav.channels; // etc. 1,2
lDataFormat.samplesPerSec = audio->wav.sampleRate * 1000; // etc. 44100 * 1000
lDataFormat.bitsPerSample = audio->wav.bitsPerSample; // etc. 16
lDataFormat.containerSize = audio->wav.bitsPerSample;
lDataFormat.channelMask = SL_SPEAKER_FRONT_CENTER;
lDataFormat.endianness = SL_BYTEORDER_LITTLEENDIAN;
SLDataSource lDataSource;
lDataSource.pLocator = &lDataLocatorIn;
lDataSource.pFormat = &lDataFormat;
SLDataLocator_OutputMix lDataLocatorOut;
lDataLocatorOut.locatorType = SL_DATALOCATOR_OUTPUTMIX;
lDataLocatorOut.outputMix = mOutputMixObj;
SLDataSink lDataSink;
lDataSink.pLocator = &lDataLocatorOut;
lDataSink.pFormat = NULL;
const SLInterfaceID lSoundPlayerIIDs[] = { SL_IID_PLAY, SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
const SLboolean lSoundPlayerReqs[] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
lRes = (*mEngine)->CreateAudioPlayer(mEngine, &mPlayerObj, &lDataSource, &lDataSink, 2, lSoundPlayerIIDs, lSoundPlayerReqs);
if (lRes != SL_RESULT_SUCCESS) { return; }
lRes = (*mPlayerObj)->Realize(mPlayerObj, SL_BOOLEAN_FALSE);
if (lRes != SL_RESULT_SUCCESS) { return; }
lRes = (*mPlayerObj)->GetInterface(mPlayerObj, SL_IID_PLAY, &mPlayer);
if (lRes != SL_RESULT_SUCCESS) { return; }
lRes = (*mPlayerObj)->GetInterface(mPlayerObj, SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &mPlayerQueue);
if (lRes != SL_RESULT_SUCCESS) { return; }
// register callback on the buffer queue
lRes = (*mPlayerQueue)->RegisterCallback(mPlayerQueue, bqPlayerQueueCallback, NULL);
if (lRes != SL_RESULT_SUCCESS) { return; }
lRes = (*mPlayer)->SetCallbackEventsMask(mPlayer, SL_PLAYEVENT_HEADATEND);
if (lRes != SL_RESULT_SUCCESS) { return; }
// ..fetch the data in 'audio->data' from opened FILE* stream and set 'datasize'
// feed the buffer with data
lRes = (*mPlayerQueue)->Clear(mPlayerQueue); // remove any sound from buffer
lRes = (*mPlayerQueue)->Enqueue(mPlayerQueue, audio->data, datasize);
The above works well for 8000, 22050 and 32000 samples/sec, but with 44100 samples/sec, 4 out of 5 times it will repeat itself lots of times on first play. It's like having a door-knocking sound effect that actually loops many times (about 50 times), and at speed, from a single ->SetPlayState(..SL_PLAYSTATE_PLAYING). Any obvious error in my code? A multi-threading issue with this sample rate? Has anyone else had this kind of problem? Should I downsample in the 44.1kHz case? Could it be a Genymotion problem? tx
I solved this by downsampling from 44kHz to 22kHz. Interestingly, this only happens on sounds with 1 channel at 44,100 samples/sec; in all other cases there's no problem.

No sound output with WASAPI

I am having trouble with WASAPI. It does not output any sound, and I have checked the data being written to the buffer.
Because it does not output any sound, I have no idea how to track down the problem.
There may be a problem in the following code.
SoundStream::SoundStream() : writtenCursor(0), writeCursor(0), distroy(false)
{
    IMMDeviceEnumerator * pEnumerator = nullptr;
    HResult(CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL, IID_PPV_ARGS(&pEnumerator)));
    IMMDevice * pDevice = nullptr;
    HResult(pEnumerator->GetDefaultAudioEndpoint(eRender, eMultimedia, &pDevice));
    SafeRelease(&pEnumerator);
    HResult(pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pAudioClient));
    SafeRelease(&pDevice);

    WAVEFORMATEXTENSIBLE * pwfx = nullptr;
    hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    REFERENCE_TIME hnsRequestedDuration = REFTIMES_PER_SEC * 2;
    HResult(pAudioClient->GetMixFormat((WAVEFORMATEX**)&pwfx));
    HResult(pAudioClient->Initialize(
        AUDCLNT_SHAREMODE_SHARED,
        AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
        hnsRequestedDuration,
        0,
        (WAVEFORMATEX*)pwfx,
        NULL));
    pAudioClient->SetEventHandle(hEvent);

    channel = (size_t)pwfx->Format.nChannels;
    bits = (size_t)pwfx->Format.wBitsPerSample;
    validBits = (size_t)pwfx->Samples.wValidBitsPerSample;
    frequency = (size_t)pwfx->Format.nSamplesPerSec;
    buffer.reshape({ 0, channel, bits >> 3 });
    CoTaskMemFree(pwfx);

    HResult(pAudioClient->GetBufferSize(&bufferFrameCount));
    HResult(pAudioClient->Start());

    if (pAudioClient)
    {
        thread = std::thread([&]()
        {
            this->Sync();
        });
    }
}
You could look at my WASAPI.cpp code at http://jdmcox.com (which works fine).
You should also check if the expected wave format is float:
// SubFormat 00000003-0000-0010-8000-00aa00389b71 defines KSDATAFORMAT_SUBTYPE_IEEE_FLOAT
// SubFormat 00000001-0000-0010-8000-00aa00389b71 defines KSDATAFORMAT_SUBTYPE_PCM
GUID G;
WORD V;
WAVEFORMATEX *pwfx = NULL;
bool itsfloat;
pAudioClient->GetMixFormat(&pwfx);
// Did we receive a WAVEFORMATEXTENSIBLE?
if (pwfx->cbSize >= 22) {
    G = ((WAVEFORMATEXTENSIBLE*)pwfx)->SubFormat;
    V = ((WAVEFORMATEXTENSIBLE*)pwfx)->Samples.wValidBitsPerSample;
    if (G.Data1 == 3) itsfloat = true;
    else if (G.Data1 == 1) itsfloat = false;
}
You know you received a WAVEFORMATEXTENSIBLE and not a simple WAVEFORMATEX because pwfx->cbSize >= 22.
See more at:
IAudioClient::GetMixFormat
https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudioclient-getmixformat
WAVEFORMATEXTENSIBLE
https://learn.microsoft.com/en-us/windows/win32/api/mmreg/ns-mmreg-waveformatextensible
You could look at my WASAPI.cpp code at http://jdmcox.com AGAIN.
Now it works in shared mode as well as exclusive mode.
I should note that no conversion of the wave format or wave data is necessary in shared mode -- Windows takes care of converting both to and from the format it uses to mix waves.

How to monitor playback on ALSA via asoundlib?

I'm building an application that allows for ALSA configuration, and in the GUI there is a peak meter that lets the client see playback levels in real time. I'm having a hard time determining which device to connect to, because I don't know whether ALSA has a default "loopback" device and, if so, what it's called. I am also having trouble converting the read data into a sample and then finding that sample's amplitude. Here is what I have built so far:
Grab device and set hardware params
if (0 == snd_pcm_open(&pPcm, "default", SND_PCM_STREAM_CAPTURE, SND_PCM_NONBLOCK))
{
    if (0 == snd_pcm_set_params(pPcm, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED, 1, 96000, 1, 1)) // This last argument confuses me because I'm not given a unit of measurement (second, millisecond, microsecond, etc.)
    {
        return snd_pcm_start(pPcm);
    }
}
pPcm = nullptr;
return -1;
Read from the device and return the peak of the audio signal
int iRtn = -1;
if (nullptr == pPcm)
{
    if (-1 == SetupListener())
    {
        return iRtn;
    }
}

// Check to make sure the state is sane for reading.
if (SND_PCM_STATE_PREPARED == snd_pcm_state(pPcm) ||
    SND_PCM_STATE_RUNNING == snd_pcm_state(pPcm))
{
    snd_pcm_resume(pPcm); // This might be superfluous.

    // The state is sane, read from the stream.
    signed short iBuffer = 0;
    int iNumRead = snd_pcm_readi(pPcm, &iBuffer, 1);
    if (0 < iNumRead)
    {
        // This calculates an approximation.
        // We have some audio data, acquire its peak in dB (decibels).
        float nSample = static_cast<float>(iBuffer);
        float nAmplitude = nSample / MAX_AMPLITUDE_S16; // MAX_AMPLITUDE_S16 is defined as "32767"
        float nDecibels = (0 < nAmplitude) ? 20 * log10(nAmplitude) : 0;
        iRtn = static_cast<int>(nDecibels); // Cast to integer for GUI element.
    }
}
return iRtn;
The ALSA documentation seems very barren and so I apologize if I'm misusing the API.
