I'm developing a karaoke application and I'm trying to add a fun feature.
Can I use AudioKit to offline-render an audio file with a time-based, dynamic tempo?
Here is some of my code:
// I want to change the tempo of the BGM audio file dynamically
self.timePitch = AKTimePitch(self.bgmPlayer)
// set the initial rate on the time-pitch node
self.timePitch.rate = 1.0
// supports iOS 10+
self.out = AKOfflineRenderNode()
self.timePitch.connect(to: self.out)
// make the offline renderer AudioKit's output
AudioKit.output = self.out
do {
    try AudioKit.start()
} catch {
    debugPrint(error.localizedDescription)
}
let url = URL(fileURLWithPath: NSTemporaryDirectory() + "output.caf")
// get the total duration
let duration = self.duration()
DispatchQueue.global(qos: .background).async {
    do {
        let avAudioTime = AVAudioTime(sampleTime: 0, atRate: self.out.avAudioNode.inputFormat(forBus: 0).sampleRate)
        // start playing the BGM
        self.bgmPlayer.play(at: avAudioTime)
        // and render it to an offline file
        try self.out?.renderToURL(url, duration: duration)
        // **********
        // Question:
        // Can I change the tempo value while rendering?
        // **********
        // stop when finished
        self.bgmPlayer.stop()
    } catch {
        debugPrint(error)
    }
}
It really depends on how the dynamic tempo is realized: you can route the audio through a time/pitch-shifting node and render the result offline.
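A minimal sketch of that approach, assuming plain AVAudioEngine manual (offline) rendering on iOS 11+ rather than AKOfflineRenderNode: render the file in small blocks and update the time-pitch rate between blocks. tempo(at:) below is a hypothetical function standing in for your own tempo curve.
import AVFoundation

// Sketch: offline-render a file while varying the tempo block by block.
// `tempo(at:)` is a hypothetical function returning the desired rate at a given output time.
func renderWithDynamicTempo(sourceFile: AVAudioFile, to outputURL: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()

    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: sourceFile.processingFormat)
    engine.connect(timePitch, to: engine.mainMixerNode, format: sourceFile.processingFormat)

    // Offline (manual) rendering, available on iOS 11+ / macOS 10.13+.
    let maxFrames: AVAudioFrameCount = 4096
    try engine.enableManualRenderingMode(.offline,
                                         format: sourceFile.processingFormat,
                                         maximumFrameCount: maxFrames)
    try engine.start()

    player.scheduleFile(sourceFile, at: nil)
    player.play()

    let outputFile = try AVAudioFile(forWriting: outputURL,
                                     settings: sourceFile.fileFormat.settings)
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: maxFrames)!

    // Note: when rate != 1.0 the output length no longer matches the source length,
    // so in practice you would derive the total output frame count from your tempo map.
    while engine.manualRenderingSampleTime < sourceFile.length {
        // Pick the tempo for this block from your own (hypothetical) tempo curve.
        let renderedSeconds = Double(engine.manualRenderingSampleTime) / sourceFile.processingFormat.sampleRate
        timePitch.rate = tempo(at: renderedSeconds)

        let remaining = AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime)
        let status = try engine.renderOffline(min(maxFrames, remaining), to: buffer)
        if status == .success {
            try outputFile.write(from: buffer)
        }
    }

    player.stop()
    engine.stop()
}
The same block-by-block idea should carry over to an AudioKit graph, since AKTimePitch is built on the same time-pitch audio unit; the key point is that offline rendering gives you a render loop in which you can change the rate per block instead of making a single renderToURL call.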
I'm making a discord bot with Discord.js v14 that records users' audio as individual files and one collective file. As Discord.js streams do not interpolate silence, my question is how to interpolate silence into streams.
My code is based off the Discord.js recording example.
In essence, a privileged user enters a voice channel (or stage), runs /record and all the users in that channel are recorded up until the point that they run /leave.
I've tried using Node packages like combined-stream, audio-mixer, multistream and multipipe, but I'm not familiar enough with Node streams to use the pros of each to fill in the gaps the cons leave in the problem. I'm not entirely sure how to go about interpolating silence, either, whether it be through a Transform (which likely requires the stream to be continuous, or for the receiver stream to be applied onto silence) or through a sort of "multi-stream" that swaps between piping the stream and a silence buffer. I also have yet to overlay the audio files (e.g., with ffmpeg).
Would it even be possible for a Readable to await an audio chunk and, if none is given within a certain timeframe, push a chunk of silence instead? My attempt at doing so is below (again, based off the Discord.js recorder example):
// CREDIT TO: https://stackoverflow.com/a/69328242/8387760
const SILENCE = Buffer.from([0xf8, 0xff, 0xfe]);

async function createListeningStream(connection, userId) {
    // Creating manually terminated stream
    let receiverStream = connection.receiver.subscribe(userId, {
        end: {
            behavior: EndBehaviorType.Manual
        },
    });

    // Interpolating silence
    // TODO Increases file length over tenfold by stretching audio?
    let userStream = new Readable({
        read() {
            receiverStream.on('data', chunk => {
                if (chunk) {
                    this.push(chunk);
                }
                else {
                    // Never occurs
                    this.push(SILENCE);
                }
            });
        }
    });

    /* Piping userStream to file at 48kHz sample rate */
}
As an unnecessary bonus, it would help if it were possible to check whether a user ever spoke or not to eliminate creating empty recordings.
Thanks in advance.
Related:
Record all users in a voice channel in discord js v12
Adding silent frames to a node js stream when no data is received
After a lot of reading about Node streams, the solution I procured was unexpectedly simple.
Create a boolean variable recording that is true while the recording should continue and false when it should stop.
Create a buffer to handle backpressure (i.e., when data comes in faster than it is read out):
let buffer = [];
Create a readable stream into which the received user audio stream is piped:
// New audio stream (with silence)
let userStream = new Readable({
// ...
});
// User audio stream (without silence)
let receiverStream = connection.receiver.subscribe(userId, {
end: {
behavior: EndBehaviorType.Manual,
},
});
receiverStream.on('data', chunk => buffer.push(chunk));
In that stream's read method, handle recording with a timer that matches the 20 ms Opus frame interval of the user audio stream (48 kHz audio, 960 samples per frame):
read() {
    if (recording) {
        let delay = new NanoTimer();
        delay.setTimeout(() => {
            if (buffer.length > 0) {
                this.push(buffer.shift());
            }
            else {
                this.push(SILENCE);
            }
        }, '', '20m');
    }
    // ...
}
In the same method, also handle ending the stream:
// ...
else if (buffer.length > 0) {
    // Stream is ending: sending buffered audio ASAP
    this.push(buffer.shift());
}
else {
    // Ending stream
    this.push(null);
}
If we put it all together:
const NanoTimer = require('nanotimer'); // node
/* import NanoTimer from 'nanotimer'; */ // es6
const { Readable } = require('stream');
const { EndBehaviorType } = require('@discordjs/voice');

const SILENCE = Buffer.from([0xf8, 0xff, 0xfe]);
// true while the recording should continue; set to false (e.g. on /leave) to end the streams
let recording = true;

async function createListeningStream(connection, userId) {
    // Accumulates very, very slowly, but only when the user is speaking: reduces buffer size otherwise
    let buffer = [];

    // Interpolating silence into user audio stream
    let userStream = new Readable({
        read() {
            if (recording) {
                // Pushing audio at the same rate as the receiver
                // (Could probably be replaced with a standard, less precise timer)
                let delay = new NanoTimer();
                delay.setTimeout(() => {
                    if (buffer.length > 0) {
                        this.push(buffer.shift());
                    }
                    else {
                        this.push(SILENCE);
                    }
                    // delay.clearTimeout();
                }, '', '20m'); // 20 ms per tick, matching the 20 ms Opus frames Discord sends (48 kHz audio)
            }
            else if (buffer.length > 0) {
                // Sending buffered audio ASAP
                this.push(buffer.shift());
            }
            else {
                // Ending stream
                this.push(null);
            }
        }
    });

    // Redirecting user audio to userStream to have silence interpolated
    let receiverStream = connection.receiver.subscribe(userId, {
        end: {
            behavior: EndBehaviorType.Manual, // Manually closed elsewhere
        },
        // mode: 'pcm',
    });
    receiverStream.on('data', chunk => buffer.push(chunk));

    // pipeline(userStream, ...), etc.
}
From here, you can pipe that stream into a file write stream, etc. for individual purposes. Note that it's a good idea to also close the receiverStream whenever recording = false, with something like:
connection.receiver.subscriptions.delete(userId);
The userStream should also be closed if it isn't already, e.g., as the first argument of the pipeline method.
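For example, a minimal sketch of that last step (the ./recordings path is a placeholder; the Ogg muxing follows the prism-media usage from the Discord.js recorder example this answer is based on):
const { pipeline } = require('node:stream');
const { createWriteStream } = require('node:fs');
const prism = require('prism-media');

// Wrap the raw Opus packets (including the pushed SILENCE frames) into an Ogg container.
const oggStream = new prism.opus.OggLogicalBitstream({
    opusHead: new prism.opus.OpusHead({ channelCount: 2, sampleRate: 48000 }),
    pageSizeControl: { maxPackets: 10 },
});
const out = createWriteStream(`./recordings/${userId}.ogg`);

pipeline(userStream, oggStream, out, (err) => {
    if (err) console.warn(`Error recording ${userId}:`, err);
});

// Later, e.g. when /leave is run:
recording = false;                                // lets userStream flush its buffer and push null
connection.receiver.subscriptions.delete(userId); // closes the receiver stream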
As a side note, although outside the scope of my original question, there are many other modifications you can make to this. For instance, you can prepend silence to the audio before piping the receiverStream's data to the userStream, e.g., to make multiple audio streams the same length:
// let startTime = ...
const creationTime = Date.now();
// one 20 ms SILENCE frame for every 20 ms elapsed since startTime
for (let t = startTime; t < creationTime; t += 20) {
    buffer.push(SILENCE);
}
Happy coding!
I can't find a Node.js npm package that allows me to record the current output of my Mac's sound and lets me analyze it.
I'm trying to create a music visualizer that just shows the current volume of the sound playing from the computer.
Does anyone have tips or an idea of what kind of package to use?
You could use node-audiorecorder to capture time-series audio data as a WAV stream.
This will basically give you raw PCM sample values that you can work with.
Some example code for the above:
// Import module.
const AudioRecorder = require('node-audiorecorder');

// Options is an optional parameter for the constructor call.
// If an option is not given the default value, as seen below, will be used.
const options = {
    program: `rec`,             // Which program to use, either `arecord`, `rec`, or `sox`.
    device: null,               // Recording device to use, e.g. `hw:1,0`
    bits: 16,                   // Sample size. (only for `rec` and `sox`)
    channels: 1,                // Channel count.
    encoding: `signed-integer`, // Encoding type. (only for `rec` and `sox`)
    format: `S16_LE`,           // Encoding type. (only for `arecord`)
    rate: 16000,                // Sample rate.
    type: `wav`,                // Format type.
    // Following options only available when using `rec` or `sox`.
    silence: 2,                 // Duration of silence in seconds before it stops recording.
    thresholdStart: 0.5,        // Silence threshold to start recording.
    thresholdStop: 0.5,         // Silence threshold to stop recording.
    keepSilence: true           // Keep the silence in the recording.
};

// Optional parameter intended for debugging.
// The object has to implement a log and warn function.
const logger = console;

// Create an instance.
let audioRecorder = new AudioRecorder(options, logger);
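From there you can start the recorder and read or pipe its WAV stream; a small usage sketch based on the package's documented start/stop/stream methods (the file name is arbitrary):
const fs = require('fs');

// Start recording and pipe the WAV stream to a file (or into your analysis code).
const fileStream = fs.createWriteStream('capture.wav', { encoding: 'binary' });
audioRecorder.start().stream().pipe(fileStream);

// Surface errors from the underlying sox/rec/arecord process.
audioRecorder.stream().on('error', err => console.warn('Recording error:', err));

// Stop recording when you are done.
// audioRecorder.stop();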
You could then use the data you have captured with audio-render for any analysis that you need to do (the library also has some functions to capture data itself):
// assumes the audio-render and audio-speaker packages
const Render = require('audio-render');
const Speaker = require('audio-speaker');

myAudioStream
    .pipe(Render(function (canvas) {
        var data = this.getFloatTimeDomainData();
        // draw volume, spectrum, spectrogram, waveform - any data you need
    }))
    .pipe(Speaker());
I'm interested in getting a screenshot from the user and I'm using the getDisplayMedia API to capture the user's screen:
const constraints = { video: true, audio: false };

if (navigator.mediaDevices["getDisplayMedia"]) {
    navigator.mediaDevices["getDisplayMedia"](constraints).then(startStream).catch(error);
} else {
    navigator.getDisplayMedia(constraints).then(startStream).catch(error);
}
When executed, the browser asks the user whether they want to share their display. After the user accepts the prompt, the provided callback receives a MediaStream. For visualization, I'm binding it directly to a <video> element:
const startStream = (stream: MediaStream) => {
    this.video.nativeElement.srcObject = stream;
};
This is simple and very effective so far. Nevertheless, I'm only interested in a single frame and I'd therefore like to manually stop the stream as soon as I've processed it.
What I tried is removing the video element from the DOM, but Chrome keeps displaying a message that the screen is currently being captured. So this only affected the video element, not the stream itself.
I've looked at the Screen Capture API article on MDN but couldn't find any hints on how to stop the stream.
How do I end the stream properly so that the prompt stops as well?
Rather than stopping the stream itself, you can stop its tracks.
Iterate over the tracks using the getTracks method and call stop() on each of them:
stream.getTracks()
.forEach(track => track.stop())
As soon as all tracks are stopped, Chrome's capturing prompt disappears as well.
Start screen capture sample code:
async function startCapture() {
    logElem.innerHTML = "";
    try {
        videoElem.srcObject = await navigator.mediaDevices.getDisplayMedia(displayMediaOptions);
        dumpOptionsInfo();
    } catch (err) {
        console.error("Error: " + err);
    }
}
Stop screen capture sample code:
function stopCapture(evt) {
    let tracks = videoElem.srcObject.getTracks();
    tracks.forEach(track => track.stop());
    videoElem.srcObject = null;
}
more info: MDN - Stopping display capture
I am starting with the LibVLCSharp.WPF.Sample project, and while it plays my VOB, I cannot change the audio track.
I call SetAudioTrack and it always returns false (and the track doesn't change). Full VLC lets me change the audio track on the same file just fine.
var media = new Media(_libVLC, clip.GetEvoPath(playlist), FromType.FromPath);
var status = await media.Parse();
_mediaPlayer.Playing += (sender, e) =>
{
    bool ok = _mediaPlayer.SetAudioTrack(3);
};
_mediaPlayer.Play(media);
When should I call this, to get it to work? Ideally I would like to set the stream before I start playback, but that doesn't work either.
I want to merge video captured by an AVCaptureSession and audio recorded by an AVAudioRecorder into one mp4 file.
I tried this solution: Swift Merge audio and video files into one video
and this one: Merging audio and video Swift
but neither of them worked. They both gave me the same error: Thread 1: signal SIGABRT.
So what is the right way to merge audio and video?
UPDATE:
I found where the app crashes every time: it crashes on the line that tries to get an AVAssetTrack from the AVAsset.
func merge(audio: URL, withVideo video: URL) {
    // create composition
    let mutableComposition = AVMutableComposition()

    // Create the video composition track.
    let mutableCompositionVideoTrack: AVMutableCompositionTrack = mutableComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

    // Create the audio composition track.
    let mutableCompositionAudioTrack: AVMutableCompositionTrack = mutableComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)

    // create media assets and tracks
    let videoAsset = AVAsset(url: video)
    let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0] // ** always crashes here ** //

    let audioAsset = AVAsset(url: audio)
    let audioAssetTrack = audioAsset.tracks(withMediaType: AVMediaTypeAudio)[0] // ** always crashes here ** //

    ....
}