How to programmatically stop sound playback in SuperCollider

I have the following piece of code, which should play a synth function for one second, stop it, play it again after one second, and so on:
t = Task({
    {
        var a;
        a = { [0, 0, SinOsc.ar(852, 0, 2.2) + SinOsc.ar(1633, 0, 2.2), 0] };
        a.play;
        1.wait;
        a.release(5);
        1.wait;
    }.loop
});
t.play;
The problem is that a doesn't stop playing; instead, additional copies of a get started on the server. What is wrong here, and how can a playing synth be stopped?

In that code, a is a function, so a.release does not tell the synth to stop playing. (a.play builds a synth from the function and returns it, but the code above never keeps that reference.)
Instead, why not write a SynthDef with a 5-second-long envelope on it:
SynthDef(\sines, { arg out = 0, release_dur, gate = 1, amp = 0.2;
    var sines, env;
    env = EnvGen.kr(Env.asr(0.01, amp, release_dur), gate, doneAction: 2);
    sines = SinOsc.ar(852, 0, 2.2) + SinOsc.ar(1633, 0, 2.2);
    Out.ar(out, sines * env);
}).add;
t = Task({
    {
        var a;
        a = Synth.new(\sines, [\release_dur, 5, \out, 0, \amp, 0.2, \gate, 1]);
        1.wait;
        a.set(\gate, 0);
        1.wait;
    }.loop
});
t.play;
The release duration is passed in as an argument, so you can set it in the Task's a = Synth.new(...) line.
Then, when you want to end the synth, send it a gate of 0. This tells the envelope to release, which it does over 5 seconds, and then the doneAction removes the synth from the server. Note that you will have more than one synth playing at once, because the release time is longer than your wait time.
Also, you've set the amplitude of your sines to be greater than 1; I did not change this in the SynthDef above.
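If you do want to keep the summed oscillators within ±1, a quick sketch (mine, not part of the original answer) is to drop the 2.2 multipliers and scale the sum, letting amp control the overall level:

// Sketch: each SinOsc defaults to mul: 1, so the sum peaks at 2;
// halving it keeps the signal within ±1 before the envelope's amp applies.
sines = (SinOsc.ar(852) + SinOsc.ar(1633)) * 0.5;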

Related

Timing is not reliable playing audio using sdl2/mixer in nim

I'm trying to build a simple metronome to learn the Nim programming language, and though I can get audio to play, the timing doesn't work. I'm running this on Mac OSX, and there is always a lag every third or fourth 'click'.
Here's my code:
# nim code to create a metronome
import times, os
import sdl2, sdl2/mixer

sdl2.init(INIT_AUDIO)

var click: ChunkPtr
var channel: cint
var audio_rate: cint
var audio_format: uint16
var audio_buffers: cint = 4096
var audio_channels: cint = 2

if mixer.openAudio(audio_rate, audio_format, audio_channels, audio_buffers) != 0:
  quit("There was a problem")

click = mixer.loadWAV("click.wav")

var bpm = 120
var next_click = getTime()
let dur = initDuration(milliseconds = toInt(60000 / bpm))
var last_click = getTime()

while true:
  var now = getTime()
  if now >= next_click:
    next_click = next_click + dur
    # discard mixer.playChannelTimed(0, click, 0, cint(500))
    discard mixer.playChannel(0, click, 0)
  os.sleep(1)
Any idea why the lag?
(by the way, the click.wav file is only one channel and 0.2 seconds long)
The call to os.sleep(1) is unreliable as a high-precision timing control. On Mac OSX it ends up calling nanosleep, whose man page states:
If the interval specified in req is not an exact multiple of the granularity underlying clock (see time(7)), then the interval will be rounded up to the next multiple. Furthermore, after the sleep completes, there may still be a delay before the CPU becomes free to once again execute the calling thread.
As such, you need to find a different, more reliable waiting method, or simply remove that delay and burn CPU cycles in the hope of being more precise (your program could still get preempted by the OS anyway).
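For illustration, a minimal sketch of the busy-wait option, reusing the variables from the question (untested against your setup):

# Sketch: spin on the clock instead of sleeping. More precise timing
# at the cost of one saturated core; the OS can still preempt the loop.
while true:
  if getTime() >= next_click:
    next_click = next_click + dur
    discard mixer.playChannel(0, click, 0)
  # note: no os.sleep(1) here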

First note played in AKSequencer is off

I am using AKSequencer to create a sequence of notes that are played by an AKMidiSampler. My problem is that at higher tempos the first note always plays with a little delay, no matter what I do.
I tried prerolling the sequence, but it won't help. Substituting the AKMidiSampler with an AKSampler or an AKSamplePlayer (and using a callback track to play them) hasn't helped either, though it made me think that the problem probably resides in the sequencer or in the way I create the notes.
Here's an example of what I'm doing (I tried to make it as simple as I could):
import UIKit
import AudioKit

class ViewController: UIViewController {

    let sequencer = AKSequencer()
    let sampler = AKMIDISampler()
    let callbackInst = AKCallbackInstrument()
    var metronomeTrack: AKMusicTrack?
    var callbackTrack: AKMusicTrack?

    let numberOfBeats = 8
    let tempo = 280.0
    var startTime: TimeInterval = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        print("Begin setup.")

        // Load .wav sample in AKMidiSampler
        do {
            try sampler.loadWav("tick")
        } catch {
            print("sampler.loadWav() failed")
        }

        // Create tracks for the sequencer and set MIDI outputs
        metronomeTrack = sequencer.newTrack("metronomeTrack")
        callbackTrack = sequencer.newTrack("callbackTrack")
        metronomeTrack?.setMIDIOutput(sampler.midiIn)
        callbackTrack?.setMIDIOutput(callbackInst.midiIn)

        // Set up and start AudioKit
        AudioKit.output = sampler
        do {
            try AudioKit.start()
        } catch {
            print("AudioKit.start() failed")
        }

        // Set sequencer tempo
        sequencer.setTempo(tempo)

        // Create the notes
        var midiSequenceIndex = 0
        for i in 0 ..< numberOfBeats {
            // Add notes to tracks
            metronomeTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: Double(midiSequenceIndex)), duration: AKDuration(beats: 0.5))
            callbackTrack?.add(noteNumber: MIDINoteNumber(midiSequenceIndex), velocity: 100, position: AKDuration(beats: Double(midiSequenceIndex)), duration: AKDuration(beats: 0.5))
            print("Adding beat number \(i+1) at position: \(midiSequenceIndex)")
            midiSequenceIndex += 1
        }

        // Set the callback
        callbackInst.callback = { status, noteNumber, velocity in
            if status == .noteOn {
                let currentTime = Date().timeIntervalSinceReferenceDate
                let noteDelay = currentTime - (self.startTime + (60.0 / self.tempo) * Double(noteNumber))
                print("Beat number: \(noteNumber) delay: \(noteDelay)")
            } else if (noteNumber == midiSequenceIndex - 1) && (status == .noteOff) {
                print("Sequence ended.\n")
                self.toggleMetronomePlayback()
            } else { return }
        }

        // Preroll the sequencer
        sequencer.preroll()
        print("Setup ended.\n")
    }

    @IBAction func playButtonPressed(_ sender: UIButton) {
        toggleMetronomePlayback()
    }

    func toggleMetronomePlayback() {
        if sequencer.isPlaying == false {
            print("Playback started.")
            startTime = Date().timeIntervalSinceReferenceDate
            sequencer.play()
        } else {
            sequencer.stop()
            sequencer.rewind()
        }
    }
}
Could anyone help? Thank you.
As Aure commented, the startup latency is a known problem. Even with preroll, there is still noticeable latency, especially at higher tempos.
But if you are using a looping sequence, I found that you can sometimes mitigate how noticeable the latency is by setting the 'starting point' of the sequence to a position after the final MIDI event, but within the loop length. If you can find a good position, you can get the latency effects out of the way before the sequence loops back to your content.
Make sure to call setTime() before you need it (e.g., after stopping the sequence, not when you are ready to play), because the setTime() call itself can introduce about 200 ms of wonkiness.
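For example, a rough sketch of that idea (the beat values here are my assumptions, not taken from the question):

// Sketch: content occupies beats 0..<8; pad the looped length to 16 beats
// and park the playhead in the empty region while the sequencer is stopped.
sequencer.setLength(AKDuration(beats: 16))
sequencer.enableLooping()
sequencer.stop()
sequencer.setTime(12)   // anywhere in 8..<16 keeps the glitch out of earshot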
Edit:
As an afterthought, you could do the same thing on a non-looping sequence by enabling looping anyway and using an arbitrarily long sequence length. If you needed playback to stop at the end of the MIDI content, you could do this with an AKCallbackInstrument triggered by a MIDI event placed just after the final note.
After a bit of testing, I actually found out that it is not the first note that plays off; it is the subsequent notes that play in advance. Moreover, the number of notes that play exactly on time when starting the sequencer depends on the set tempo.
The funny thing is that if the tempo is below 400 BPM, one note plays on time and the others in advance; if it is between 400 and 800 BPM, two notes play correctly and the others in advance; and so on: for every 400 BPM increment you get one more note played correctly.
So... since the notes are played in advance and not late, the solution that worked for me is (see the sketch after this list):
1) Use a sampler that is not connected directly to a track's MIDI output, but instead has its .play() method called inside a callback.
2) Keep track of when the sequencer gets started.
3) In every callback, calculate when the note should play relative to the start time, and note what time it actually is, so you can compute the offset.
4) Use the computed offset to dispatch your .play() call asynchronously after that offset.
And that's it. I tested this on multiple devices, and now all the notes play perfectly on time.
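A sketch of those four steps, reusing the names from the question's code (the exact sampler call is an assumption on my part):

// Sketch: measure how early the callback fired relative to the note's
// ideal time, then defer the sampler hit by that offset.
callbackInst.callback = { status, noteNumber, velocity in
    guard status == .noteOn else { return }
    let ideal = self.startTime + (60.0 / self.tempo) * Double(noteNumber)
    let offset = ideal - Date().timeIntervalSinceReferenceDate  // > 0 when early
    DispatchQueue.main.asyncAfter(deadline: .now() + max(0, offset)) {
        try? self.sampler.play(noteNumber: 60, velocity: velocity, channel: 0)
    }
}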
I had the same issue; preroll didn't help, but I managed to solve it with a dedicated sampler for the first notes.
I used a delay of about 0.06 seconds on the other sampler; works like a charm.
Kind of a silly solution, but it did the job and I could go on with the project :)
// This works around the AK bug where the first playback is out of time
let fixDelay = AKDelay()
fixDelay.dryWetMix = 1
fixDelay.feedback = 0
fixDelay.lowPassCutoff = 22000
fixDelay.time = 0.06
fixDelay.start()
let preDelayMixer = AKMixer()
let preFirstMixer = AKMixer()
[playbackSampler,vocalSampler] >>> preDelayMixer >>> fixDelay
[firstNoteVocalSampler, firstRoundPlaybackSampler] >>> preFirstMixer
[fixDelay,preFirstMixer] >>> endMixer

How do I set volume in SuperCollider in Decibels?

I have a simple SinOsc which plays a 432 Hz tone. I want to be able to set that tone to -97 dB. Here's what I have so far:
{
    SinOsc.ar(432, 0, 0.01 /* edit this for volume */, 0)
}.play;
Even though I can see how to edit volume, I don't see a way to set the precise dB level.
In case you are wondering why I am doing this, I need a tone to test 24-bit vs. 16-bit audio.
How can I set the precise dB level or get monitoring to show me what level I am at?
Ah, cool to see a SuperCollider question in Top Questions.
I believe the method you're looking for is .dbamp. See the docs.
Example: (from The SuperCollider Book, Chapter 2)
/* Figure 2.6 */
(
SynthDef(\UGen_ex6, { arg gate = 1, roomsize = 200, revtime = 450;
    var src, env, gverb;
    env = EnvGen.kr(Env([0, 1, 0], [1, 4], [4, -4], 1), gate, doneAction: 2);
    src = Resonz.ar(
        Array.fill(4, { Dust.ar(6) }),
        1760 * [1, 2.2, 3.95, 8.76] +
            Array.fill(4, { LFNoise2.kr(1, 20) }),
        0.01).sum * 30.dbamp;
    gverb = GVerb.ar(
        src,
        roomsize,
        revtime,
        // feedback loop damping
        0.99,
        // input bw of signal
        LFNoise2.kr(0.1).range(0.9, 0.7),
        // spread
        LFNoise1.kr(0.2).range(0.2, 0.6),
        // almost no direct source
        -60.dbamp,
        // some early reflection
        -18.dbamp,
        // lots of the tail
        3.dbamp,
        roomsize);
    Out.ar(0, gverb * env)
}).add;
)
a = Synth(\UGen_ex6);
If that 0.01 value is the gain, then simply replace it with the result of
10^(-97/20) = 0.00001412537
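Or apply .dbamp directly in the original snippet, which is equivalent and self-documenting:

// -97.dbamp computes 10 ** (-97/20), i.e. about 1.4125e-05
{ SinOsc.ar(432, 0, -97.dbamp, 0) }.play;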

Flushing on UART doesn't work as expected

I need to write a sequence of values (a buffer, ~10 bytes) via UART.
This sequence needs to start with a BREAK delimiter, and in my case I need to decrease the baud rate to a lower value to produce it.
Details about my environment:
Development board: BeagleBone Black.
Linux Kernel version: 3.8.13-bone70.
Serial driver used by the tty discipline: omap-serial.
What I finally ended up with is something like this:
UARTIOHandler->setBaudRate(B9600);
unsigned char breakChar[] = { 0 };
UARTIOHandler->write(breakChar, 1);
UARTIOHandler->setBaudRate(B19200);
UARTIOHandler->write({1, 2, 3, 4, 5, 6, 7, 8, 9, 10});
The write method is implemented this way:
int UARTIOHandler::write(const std::initializer_list<uchar8> &data) {
    uchar8 buffer[data.size()];
    int counter = 0;
    for (auto i : data) {
        buffer[counter++] = i;
    }
    auto output = ::write(this->fd_write, buffer, data.size());
    this->flush();
    return output;
}
And finally the flush() method:
void UARTIOHandler::flush() {
    tcflush(this->fd_write, TCIOFLUSH);
}
The problem with this code is that the flushing doesn't always work: sometimes the distance between the BREAK and the first byte of data (observed on a scope) is ~500 µs (which is fine for my application), and sometimes it is up to ~3 ms.
EDIT: This is the actual behavior:
For the first five seconds everything works fine (the distance between the BREAK and the rest of the message doesn't exceed ~1 ms); then, after five seconds, some frames exceed this inter-byte timing (reaching up to ~3 ms).
The code that I posted is always executed, so there is no possibility that I somehow forget to flush the buffers.
Why do these variations happen?
I have searched for relevant problems and found this; one workaround described there is to add a delay in front of the tcflush(...) call. I can't use this method in my application because it would affect the functionality.
Another comment in that topic suggests that this was a bug in the Linux kernel; could that also be the case here?
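For reference, the linked workaround would look roughly like this (a sketch only; the 1 ms pause is my assumption, and as noted above it is not usable in this application):

#include <unistd.h>   // usleep

void UARTIOHandler::flush() {
    usleep(1000);                        // brief pause before flushing (workaround)
    tcflush(this->fd_write, TCIOFLUSH);  // discards any data still queued
}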

gstreamer read decibel from buffer

I am trying to get the dB level of incoming audio samples. On every video frame, I update the dB level and draw a bar representing a 0-100% value (0% being something arbitrary such as -20.0 dB, and 100% being 0 dB).
gdouble sum, rms;
sum = 0.0;
guint16 *data_16 = (guint16 *)amap.data;
for (gint i = 0; i < amap.size; i = i + 2)
{
    gdouble sample = ((guint16)data_16[i]) / 32768.0;
    sum += (sample * sample);
}
rms = sqrt(sum / (amap.size / 2));
dB = 10 * log10(rms);
This was adapted to C from a code sample, marked as the answer, from here. I am wondering what it is that I am missing from this very simple equation.
Answered: jacket was correct about the code losing the sign, so everything ended up being positive. Also, 10 * log10(rms) is incorrect; it should be 20 * log10(rms), since I am converting amplitude to decibels (as a measure of output power).
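For completeness, a corrected version of the loop might look like this (a sketch assuming signed 16-bit samples; variable names follow the question's code):

/* Sketch: keep the sign by casting to gint16, visit every 16-bit sample,
 * and use 20 * log10 to convert amplitude to dB. */
gint16 *samples = (gint16 *)amap.data;
gint n = amap.size / 2;                /* bytes -> 16-bit sample count */
gdouble sum = 0.0;
for (gint i = 0; i < n; i++)
{
    gdouble sample = samples[i] / 32768.0;  /* now in -1.0 .. 1.0 */
    sum += sample * sample;
}
gdouble rms = sqrt(sum / n);
gdouble dB = 20 * log10(rms);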
The level element is best for this task (as @ensonic already mentioned); it's intended for exactly what you need.
So basically you add an element called "level" to your pipeline, then enable its message posting.
The level element then emits messages containing RMS, peak, and decay values. RMS is what you need.
You can set up a callback function connected to such message events:
audio_level = gst_element_factory_make ("level", "audiolevel");
g_object_set(audio_level, "message", TRUE, NULL);
...
g_signal_connect (bus, "message::element", G_CALLBACK (callback_function), this);
The bus variable is of type GstBus. I hope you know how to work with buses.
Then, in the callback function, check for the element name and get the RMS as described here.
There is also a normalization algorithm using pow() to convert the value to the 0.0-1.0 range, which you can use to get the percentage you asked about in your question. A sketch of the callback follows.
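A minimal sketch of such a callback (the field access follows the level element's documented message layout; error handling is omitted):

/* Sketch: pull channel 0's RMS (already in dB) out of a "level" message. */
static void callback_function(GstBus *bus, GstMessage *message, gpointer user_data)
{
    const GstStructure *s = gst_message_get_structure(message);
    if (s && gst_structure_has_name(s, "level")) {
        const GValue *array_val = gst_structure_get_value(s, "rms");
        GValueArray *rms_arr = (GValueArray *)g_value_get_boxed(array_val);
        gdouble rms_db = g_value_get_double(g_value_array_get_nth(rms_arr, 0));
        g_print("channel 0 RMS: %f dB\n", rms_db);
    }
}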
