I am using AKSequencer to create a sequence of notes that are played by an AKMIDISampler. My problem is that at higher tempos the first note always plays with a slight delay, no matter what I do.
I tried prerolling the sequence, but it doesn't help. Substituting the AKMIDISampler with an AKSampler or an AKSamplePlayer (and using a callback track to play them) hasn't helped either, though it made me think that the problem probably lies in the sequencer or in the way I create the notes.
Here's an example of what I'm doing (I tried to make it as simple as I could):
import UIKit
import AudioKit

class ViewController: UIViewController {

    let sequencer = AKSequencer()
    let sampler = AKMIDISampler()
    let callbackInst = AKCallbackInstrument()

    var metronomeTrack: AKMusicTrack?
    var callbackTrack: AKMusicTrack?

    let numberOfBeats = 8
    let tempo = 280.0
    var startTime: TimeInterval = 0

    override func viewDidLoad() {
        super.viewDidLoad()

        print("Begin setup.")

        // Load .wav sample into the AKMIDISampler
        do {
            try sampler.loadWav("tick")
        } catch {
            print("sampler.loadWav() failed")
        }

        // Create tracks for the sequencer and set MIDI outputs
        metronomeTrack = sequencer.newTrack("metronomeTrack")
        callbackTrack = sequencer.newTrack("callbackTrack")
        metronomeTrack?.setMIDIOutput(sampler.midiIn)
        callbackTrack?.setMIDIOutput(callbackInst.midiIn)

        // Set up and start AudioKit
        AudioKit.output = sampler
        do {
            try AudioKit.start()
        } catch {
            print("AudioKit.start() failed")
        }

        // Set sequencer tempo
        sequencer.setTempo(tempo)

        // Create the notes
        var midiSequenceIndex = 0
        for i in 0 ..< numberOfBeats {
            // Add notes to tracks
            metronomeTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: Double(midiSequenceIndex)), duration: AKDuration(beats: 0.5))
            callbackTrack?.add(noteNumber: MIDINoteNumber(midiSequenceIndex), velocity: 100, position: AKDuration(beats: Double(midiSequenceIndex)), duration: AKDuration(beats: 0.5))
            print("Adding beat number \(i + 1) at position: \(midiSequenceIndex)")
            midiSequenceIndex += 1
        }

        // Set the callback
        callbackInst.callback = { status, noteNumber, velocity in
            if status == .noteOn {
                let currentTime = Date().timeIntervalSinceReferenceDate
                let noteDelay = currentTime - (self.startTime + (60.0 / self.tempo) * Double(noteNumber))
                print("Beat number: \(noteNumber) delay: \(noteDelay)")
            } else if (noteNumber == midiSequenceIndex - 1) && (status == .noteOff) {
                print("Sequence ended.\n")
                self.toggleMetronomePlayback()
            } else {
                return
            }
        }

        // Preroll the sequencer
        sequencer.preroll()

        print("Setup ended.\n")
    }

    @IBAction func playButtonPressed(_ sender: UIButton) {
        toggleMetronomePlayback()
    }

    func toggleMetronomePlayback() {
        if sequencer.isPlaying == false {
            print("Playback started.")
            startTime = Date().timeIntervalSinceReferenceDate
            sequencer.play()
        } else {
            sequencer.stop()
            sequencer.rewind()
        }
    }
}
Could anyone help? Thank you.
As Aure commented, the start-up latency is a known problem. Even with preroll, there is still noticeable latency, especially at higher tempos.
But if you are using a looping sequence, I found that you can sometimes mitigate how noticeable the latency is by setting the 'starting point' of the sequence to a position after the final MIDI event, but within the loop length. If you can find a good position, you can get the latency effects out of the way before it loops back to your content.
Make sure to call setTime() before you need it (e.g., after stopping the sequence, not when you are ready to play), because the setTime() call itself can introduce about 200 ms of wonkiness.
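For example, a minimal sketch of that idea (my illustration, not tested against your project; it assumes the sequencer from the question, a looping 4-beat sequence, and MIDI content that ends before beat 2):
func stopPlayback() {
    sequencer.stop()
    // Park the playhead after the last MIDI event but still inside the loop,
    // so the start-up latency is spent in the empty part of the loop
    // before it wraps back around to the actual content.
    sequencer.setTime(2.0) // MusicTimeStamp, in beats
}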
Edit:
As an afterthought, you could do the same thing on a non-looping sequence by enabling looping and using an arbitrarily long sequence length. If you needed playback to stop at the end of the MIDI content, you could do this with an AKCallbackInstrument triggered by a MIDI event placed just after the final note.
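A rough sketch of that variant, reusing the question's sequencer, callbackTrack and callbackInst (the 64-beat length, the beat positions, and note number 127 used as a stop marker are illustrative assumptions, not anything from the original code):
// Make the looped sequence much longer than the real content (8 beats here)
sequencer.setLength(AKDuration(beats: 64))
sequencer.enableLooping()

// A "stop" marker note placed just after the final real note
callbackTrack?.add(noteNumber: 127, velocity: 100,
                   position: AKDuration(beats: 8.25),
                   duration: AKDuration(beats: 0.1))

callbackInst.callback = { status, noteNumber, _ in
    if status == .noteOn && noteNumber == 127 {
        // End of the MIDI content reached: stop instead of looping again
        self.sequencer.stop()
    }
}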
After a bit of testing I actually found out that it is not the first note that is off; it is the subsequent notes that play early. Moreover, the number of notes that play exactly on time when starting the sequencer depends on the tempo that is set.
The funny thing is that if the tempo is below 400 BPM, one note plays on time and the others play early; if it is between 400 and 800 BPM, two notes play correctly and the others early; and so on: for every 400 BPM increment you get one more note played correctly.
So... since the notes play early rather than late, the solution that worked for me is:
1) Use a sampler that is not connected directly to a track's MIDI output but has its .play() method called inside a callback.
2) Keep track of when the sequencer is started.
3) In every callback, calculate when the note should play relative to the start time and record what time it actually is, so you can compute the offset.
4) Use the computed offset to dispatch your .play() call asynchronously after that offset.
And that's it: I tested this on multiple devices and now all the notes play perfectly on time.
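A minimal sketch of what steps 1-4 can look like (a reconstruction based on the description above, not the exact code; it reuses the sampler, tempo, startTime and callbackInst from the question, and assumes the metronome track is no longer routed to sampler.midiIn):
callbackInst.callback = { status, noteNumber, velocity in
    guard status == .noteOn else { return }
    // The callback track encodes the beat index as the note number,
    // so this is when the beat *should* sound relative to playback start.
    let expectedTime = self.startTime + (60.0 / self.tempo) * Double(noteNumber)
    let offset = expectedTime - Date().timeIntervalSinceReferenceDate
    if offset > 0 {
        // Note arrived early: defer it until its scheduled time
        DispatchQueue.main.asyncAfter(deadline: .now() + offset) {
            try? self.sampler.play(noteNumber: 60, velocity: velocity, channel: 0)
        }
    } else {
        // On time or late: play immediately
        try? self.sampler.play(noteNumber: 60, velocity: velocity, channel: 0)
    }
}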
I had the same issue; preroll didn't help, but I managed to solve it with a dedicated sampler for the first notes.
I used a delay on the other sampler, about 0.06 of a second, and it works like a charm.
Kind of a silly solution but it did the job and I could go on with the project :)
// This works around the AK bug where the first playback is played out of time
let fixDelay = AKDelay()
fixDelay.dryWetMix = 1
fixDelay.feedback = 0
fixDelay.lowPassCutoff = 22000
fixDelay.time = 0.06
fixDelay.start()

let preDelayMixer = AKMixer()
let preFirstMixer = AKMixer()

[playbackSampler, vocalSampler] >>> preDelayMixer >>> fixDelay
[firstNoteVocalSampler, firstRoundPlaybackSampler] >>> preFirstMixer
[fixDelay, preFirstMixer] >>> endMixer
Related
I'm trying to play a sound on a single speaker (mono) from a .wav file on an SD card, using an STM32H7 controller and a FreeRTOS environment.
I have currently managed to generate sound, but it is very dirty and jerky.
I'd like to show the parsed header content of my wav file but my reputation score is below 10.
The most important data are:
Format: PCM
Channels: 1
Sample rate: 44100 Hz
Bits per sample: 16
I initialize SAI2 Block A this way:
void MX_SAI2_Init(void)
{
    /* USER CODE BEGIN SAI2_Init 0 */
    /* USER CODE END SAI2_Init 0 */
    /* USER CODE BEGIN SAI2_Init 1 */
    /* USER CODE END SAI2_Init 1 */
    hsai_BlockA2.Instance = SAI2_Block_A;
    hsai_BlockA2.Init.AudioMode = SAI_MODEMASTER_TX;
    hsai_BlockA2.Init.Synchro = SAI_ASYNCHRONOUS;
    hsai_BlockA2.Init.OutputDrive = SAI_OUTPUTDRIVE_DISABLE;
    hsai_BlockA2.Init.NoDivider = SAI_MASTERDIVIDER_ENABLE;
    hsai_BlockA2.Init.FIFOThreshold = SAI_FIFOTHRESHOLD_EMPTY;
    hsai_BlockA2.Init.AudioFrequency = SAI_AUDIO_FREQUENCY_44K;
    hsai_BlockA2.Init.SynchroExt = SAI_SYNCEXT_DISABLE;
    hsai_BlockA2.Init.MonoStereoMode = SAI_MONOMODE;
    hsai_BlockA2.Init.CompandingMode = SAI_NOCOMPANDING;
    hsai_BlockA2.Init.TriState = SAI_OUTPUT_NOTRELEASED;
    if (HAL_SAI_InitProtocol(&hsai_BlockA2, SAI_I2S_STANDARD, SAI_PROTOCOL_DATASIZE_16BIT, 2) != HAL_OK)
    {
        Error_Handler();
    }
    /* USER CODE BEGIN SAI2_Init 2 */
    /* USER CODE END SAI2_Init 2 */
}
I think I set the clock frequency correctly, as I measure a frame sync clock of 43 kHz (the closest I can get to 44.1 kHz).
The file indicates that it uses PCM. My init function specifies SAI_I2S_STANDARD, but that is only because I was curious about the result with this parameter value. I get bad results in both cases.
And here is the part where I read the file and send the data to the SAI via DMA:
// Before the infinite loop I extract the overall file size in bytes.

// Infinite loop
for (;;)
{
    if (drv_sdcard_getDmaTransferComplete() == true)
    {
        // BufferRead[0] = 0xAA;
        // BufferRead[1] = 0xAA;
        //
        // ret = HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t*)BufferRead, 2);
        // drv_sdcard_resetDmaTransferComplete();

        if ((firstBytesDiscarded == true) && (remainingBytes > 0))
        {
            // read the next BufferAudio-sized chunk of audio samples
            if (remainingBytes < sizeof(BufferAudio))
            {
                remainingBytes -= drv_sdcard_readDataNoRewind(file_audio1_index, BufferAudio, remainingBytes);
            }
            else
            {
                remainingBytes -= drv_sdcard_readDataNoRewind(file_audio1_index, BufferAudio, sizeof(BufferAudio));
            }
            // send them to the SAI through DMA
            ret = HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t*)BufferAudio, sizeof(BufferAudio));
            // reset transmit flag to forbid the next transmit
            drv_sdcard_resetDmaTransferComplete();
        }
        else
        {
            // discard the header-sized first bytes
            // I removed this part here because it works properly on my side
            firstBytesDiscarded = true;
        }
    }
}
I have one avenue for improving the sound quality: filtering the speaker input. Yesterday I tried cutting at 20 kHz and 44 kHz, but it cut the signal too much... So I want to try different cutoff frequencies until the sound quality is good. It is a simple RC filter.
But I don't know what to do to fix the jerky part. To give you an idea of how the sound comes out, I would describe it like this:
we can hear a bit of melody
then scratchy sound [krrrrrrr]
then short silence
and this loops until the end of the file.
The BufferAudio size is 16*1024 bytes.
Thank you for your help
Problems
No double-buffering. You are reading data from the SD-card into the same buffer that you are playing from. So you'll get some samples from the previous read, and some samples from the new read.
Not checking when the DMA is complete. HAL_SAI_Transmit_DMA() returns immediately, and you cannot call it again until the previous DMA has completed.
Not checking return values of HAL functions. You assign ret = HAL_SAI_Transmit_DMA() but then never check what ret is. You should check whether there is an error and take appropriate action.
You seem to be driving things from how fast the SD-card can DMA the data. It needs to be based on how fast the SAI is consuming it, otherwise you will have glitches.
Possible solution
The STM32's DMA controller can be configured to run in circular-buffer mode. In this mode, it will DMA all the data given to it, and then start again from the beginning.
It also provides interrupts for when the DMA is half complete, and when it is fully complete.
These two things together can provide a smooth data transfer with no gaps and glitches, if used with the SAI DMA. You'd read data into the entire buffer to start with, and kick off the DMA. When you get the half-complete interrupt, read half a buffer's worth of data into the first half of the buffer. When you get a fully complete interrupt, read half a buffer's worth of data into the second half of the buffer.
This is pseudo-code-ish, but hopefully shows what I mean:
const size_t buff_len = 16u * 1024u;
uint16_t buff[buff_len];

void start_playback(void)
{
    read_from_file(buff, buff_len);
    if (HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t *)buff, buff_len) != HAL_OK)
    {
        // Handle error
    }
}

void sai_dma_tx_half_complete_interrupt(void)
{
    read_from_file(buff, buff_len / 2u);
}

void sai_dma_tx_full_complete_interrupt(void)
{
    read_from_file(buff + buff_len / 2u, buff_len / 2u);
}
You'd need to detect when you have consumed the entire file, and then stop the DMA (with something like HAL_SAI_DMAStop()).
You might want to read this similar question where I gave a similar answer. They were recording to SD-card rather than playing back, but the same principles apply. They also supplied their actual code for the solution they employed.
I'm trying to use an animation in a Sudoku app. I want that every time I insert a wrong number, that number changes its color and its scale.
My code is:
override fun onDraw(canvas: Canvas?) {
    canvas ?: return
    drawBoard(canvas)
    drawNumberProblem(canvas)
}

private fun drawNumberProblem(canvas: Canvas) {
    paint.color = darkcolor
    paint.textSize = cellSide * 3 / 4
    SudokuGame.numbersproblem.forEach { e ->
        canvas.drawText("${e.number}", originX + e.col * cellSide + cellSide / 5, originX + (e.row + 1) * cellSide - cellSide / 10, paint)
    }
}
And I tried:
private fun initAnimation() {
    var animation = RotateAnimation(0f, 360f, 150f, 150f)
    animation.setRepeatCount(Animation.INFINITE)
    animation.setRepeatMode(Animation.RESTART)
    animation.setDuration(7500L)
    animation.interpolator = LinearInterpolator()
    startAnimation(animation)
}

override fun onDraw(canvas: Canvas?) {
    canvas ?: return
    if (animation == null)
        initAnimation()
    drawBoard(canvas)
    drawNumberProblem(canvas)
}

private fun drawNumberProblem(canvas: Canvas) {
    paint.color = darkcolor
    paint.textSize = cellSide * 3 / 4
    SudokuGame.numbersproblem.forEach { e ->
        canvas.drawText("${e.number}", originX + e.col * cellSide + cellSide / 5, originX + (e.row + 1) * cellSide - cellSide / 10, paint)
    }
}
The animation, the board and the numbers are all good. The animation is only an example; I tried rotating it to see whether it works. The only problem is that the animation applies to the whole board, and I want the animation only over the numbers.
Is there any way to create an initAnimation with a parameter, something like initAnimation(drawNumberProblem())?
I am new to Kotlin animation, so I don't really care about the best way to do it; I just want a simple way that I can understand.
Thanks
If each cell is its own View (say a TextView) you can animate it the way you're trying to, and the animation framework will take care of the timing, the rotation and scaling, etc. Because each view is separate, they can all be animated independently, using Android's view animation libraries, and a lot of the work is taken care of for you - it's pretty easy to use!
If it's all one view, and you're drawing a bunch of elements which can all be animated, you have to keep track of those elements, any animations that should be happening to each one, and how each animation's state affects the element when it comes time to draw it. Instead of each view having its own state and being drawn separately, you have to draw the whole thing at once, because it's a single view. So you need to keep track of those individual element states yourself, so you can refer to them when drawing the current overall state.
So for example, say you've got an animation where an element needs to scale to 2x the size and then back to normal, and it runs for 1 second total (1000ms). When you come to draw that element, you need to know how far along that animation you are at that moment, so you can scale it appropriately, and draw it at the correct size.
There are lots of ways to do this, probably some smarter ones, but this is the most basic hands-on example I think. I'm not testing this, but hopefully it gives you the idea:
// for brevity, so we can just say "now" instead of writing out the whole function call
val now: Long get() = System.currentTimeMillis()

// store a start time for each grid cell (or null if there's no anim running)
val animStartTimes = arrayOfNulls<Long>(CELL_COUNT)
val animLength = 1000 // millis

// Basic function to start an animation - you could adapt this to prevent restarts
// while an anim is already running, etc
fun startAnim(cellIndex: Int) {
    animStartTimes[cellIndex] = now
    // tell the view it needs to redraw (since we're animating something now)
    invalidate()
}

// Get the current progress of an animation in a cell, from 0% to 100% (0.0 to 1.0)
// I'm treating a finished item as "reset" to its original state
fun getAnimProgress(cellIndex: Int): Float {
    val start = animStartTimes[cellIndex]
    if (start == null) return 0f
    val progress = (now - start) / animLength.toFloat()
    if (progress > 1f) {
        // animation has ended (past 100% of its runtime) so let's clear it
        animStartTimes[cellIndex] = null
        return 0f // like I said, I'm treating finished animations as "reset" to 0%
    } else return progress
}

override fun onDraw(canvas: Canvas) {
    // this flag can be set to true if we find an element that's still animating,
    // so we can decide whether to call invalidate() again (forcing a redraw next frame)
    var animating = false
    items.forEachIndexed { i, item ->
        val animProgress = getAnimProgress(i)
        if (animProgress > 0f) animating = true // set that flag
        // now you need to use that 0.0-1.0 value to calculate your animation state,
        // e.g. adjusting the text size by some factor - 0.0 should produce your "default" state
    }
    // finally, force a redraw next frame if necessary - only do this when your view
    // contents might need to change, otherwise you're wasting resources
    if (animating) invalidate()
}
I hope that makes sense - obviously I haven't shown how to actually draw the states of your animation, that depends on exactly what you're doing - but that's the basics of it. It's a lot more work than using view animation, but it's not too bad when you get the idea.
The drawing part is a little more complex, and you'll probably want to get familiar with manipulating the Canvas - e.g. to draw a rotated character, you turn the canvas, draw the character as normal, then undo the canvas rotation so it's the right way up again, and the character is tilted. I don't have time to look for any tutorials about it, but this article covers the matrix operations that scale/rotate/etc. the canvas.
So yeah, it's a bit involved - and depending on what you want to do, a grid of TextViews might be a better shout.
Good time of the day. I'm doing an assignment which involves writing a program that solves the Traveling Salesman Problem in parallel using the brute-force method. I managed to write a working version, but it was a lot slower than the sequential version due to numerous memory allocations at a high rate, which I attempted to limit with a buffered channel, as in the code below.
SO probably won't let me fit all the code into the post, so please view the definitions and methods of the data structures on GitHub: https://github.com/telephrag/tsp_go
This version doesn't work at all.
At the end, t will contain an empty path and the maximum value of uint64 for the traveled distance, which is set at the beginning.
As can be seen through the debugger, the salesman with an id of 1 gets a node added to its path at sm.Path[1], but once <-t.RouteQueue occurs in the same call to travel() the program stops, despite numerous goroutines supposedly waiting to write to t.RouteQueue at that moment. It is also confirmed through the debugger that the program never reaches the if-block responsible for setting a new shortest path in t.
If we create a sync.WaitGroup for each for-loop, the program crashes with a deadlock.
If we keep the sync.WaitGroup but remove everything related to the channel, the program works, but very slowly. You can get this version at the first commit to the main branch of the repository above.
Why does the program end prematurely?
package tsp

func (t *Tsp) Solve() {
    sm := NewSalesman(t)
    sm.Visit(0) // sets bit corresponding to given node in bitmask
    for nextNode := range graph[0] {
        t.RouteQueue <- true // book slot in route queue (buffered channel)
        go t.travel(sm.Copy(t), nextNode)
    }
}

func (t *Tsp) travel(sm *Salesman, node uint64) {
    sm.Distance += graph[sm.TailNode()][node] // increase traveled distance

    if sm.Distance > t.MinDist { // terminate if traveled distance is bigger than current minimal
        <-t.RouteQueue
        return
    }

    sm.Visit(node)
    sm.Count++               // increase amount of nodes traveled
    sm.Path[sm.Count] = node // add node to path

    if sm.Count == t.NodeCount-1 { // stop if t.NodeCount - 1 nodes traveled
        sm.Count++
        sm.Distance += graph[node][0] // return to zero-node

        t.Mu.Lock()
        if t.MinDist > sm.Distance { // set new min distance and path if they are shorter
            t.MinPath = sm.Path
            t.MinDist = sm.Distance
        }
        t.Mu.Unlock()
    }

    <-t.RouteQueue // free the slot for routes

    for nextNode := range graph[node] {
        if !sm.HasVisited(node) {
            t.RouteQueue <- true
            go t.travel(sm.Copy(t), nextNode)
        }
    }
}
I ported the MSDN capture stream example - https://msdn.microsoft.com/en-us/library/windows/desktop/dd370800
and modified it for loopback, exactly as seen here -
https://github.com/slavanap/WaveRec/blob/754b0cfdeec8d1edc59c60d179867ca6088bbfaa/wavetest.cpp
So I am requesting a recording duration of 1 second, and the actual duration verifies that it is 1 second.
However, I am stuck in an infinite loop in this packet-reading code; packetLength is always 448 (numFramesAvailable is also 448). I'm not sure why it never becomes 0, as the while loop expects.
https://github.com/slavanap/WaveRec/blob/754b0cfdeec8d1edc59c60d179867ca6088bbfaa/wavetest.cpp#L208-L232
Code is -
while (packetLength != 0)
{
    // Get the available data in the shared buffer.
    hr = pCaptureClient->GetBuffer(
        &pData,
        &numFramesAvailable,
        &flags, NULL, NULL);
    EXIT_ON_ERROR(hr)

    if (flags & AUDCLNT_BUFFERFLAGS_SILENT)
    {
        pData = NULL; // Tell CopyData to write silence.
    }

    // Copy the available capture data to the audio sink.
    // hr = pMySink->CopyData(pData, numFramesAvailable, &bDone);
    // EXIT_ON_ERROR(hr)

    hr = pCaptureClient->ReleaseBuffer(numFramesAvailable);
    EXIT_ON_ERROR(hr)

    hr = pCaptureClient->GetNextPacketSize(&packetLength);
    EXIT_ON_ERROR(hr)
}
My ported code is in ctypes and is here - https://github.com/Noitidart/ostypes_playground/blob/audio-capture/bootstrap.js#L71-L180
I guess there is a hidden bug in the Microsoft example that shows up in your case. The inner-loop processing time of your code is long enough that new data keeps arriving in the incoming audio buffer, leaving you spinning in the inner loop forever, or until you can process the incoming data at a decent speed.
Thus, I suggest adding a && !bDone condition to the inner loop:
while (bDone == FALSE)
{
    ...
    while (packetLength != 0 && bDone == FALSE)
    {
        ...
    }
}
I am trying to get the dB level of incoming audio samples. On every video frame, I update the dB level and draw a bar representing a 0-100% value (0% being something arbitrary such as -20.0 dB, and 100% being 0 dB).
gdouble sum, rms;
sum = 0.0;
guint16 *data_16 = (guint16 *)amap.data;

for (gint i = 0; i < amap.size; i = i + 2)
{
    gdouble sample = ((guint16)data_16[i]) / 32768.0;
    sum += (sample * sample);
}

rms = sqrt(sum / (amap.size / 2));
dB = 10 * log10(rms);
This was adapted to C from a code sample, marked as the answer, from here. I am wondering what it is that I am missing from this very simple equation.
Answered: jacket was correct about the code losing the sign, so everything ended up being positive. Also, the code 10 * log10(rms) is incorrect. It should be 20 * log10(rms), as I am converting amplitude to decibels (as a measure of output power).
The level element is best for this task (as @ensonic already mentioned); it's intended for exactly what you need.
So basically you add an element called "level" to your pipeline, then enable its message emission.
The level element then emits messages which contain the RMS, peak and decay values. RMS is what you need.
You can set up a callback function connected to such message events:
audio_level = gst_element_factory_make ("level", "audiolevel");
g_object_set(audio_level, "message", TRUE, NULL);
...
g_signal_connect (bus, "message::element", G_CALLBACK (callback_function), this);
The bus variable is of type GstBus. I hope you know how to work with buses.
Then, in the callback function, check for the element name and get the RMS as described here.
There is also a normalization algorithm using the pow() function to convert the value to between 0.0 and 1.0, which you can use to convert to a percentage, as you stated in your question.