Please consider the code below:
navigator.mediaDevices.getUserMedia({audio: true}).then(function() {
  navigator.mediaDevices.enumerateDevices().then((devices) => {
    devices.forEach(function(device1, k1) {
      if (device1.kind == 'audiooutput' && device1.deviceId == 'default') {
        const speakersGroupId = device1.groupId;
        devices.forEach(function(device2, k2) {
          if (device2.groupId == speakersGroupId && ['default', 'communications'].includes(device2.deviceId) === false) {
            const speakersId = device2.deviceId;
            const constraints = {
              audio: {
                deviceId: {
                  exact: speakersId
                }
              }
            };
            console.log('Requesting stream for deviceId ' + speakersId);
            navigator.mediaDevices.getUserMedia(constraints).then((stream) => { // **this always fails**
              console.log(stream);
            });
          }
        });
      }
    });
  });
});
The code asks for permission via the first getUserMedia call, then enumerates all devices, picks the default audio output, and finally tries to get a stream for that output.
But it will always throw the error: OverconstrainedError { constraint: "deviceId", message: "", name: "OverconstrainedError" } when getting the audio stream.
There is nothing I can do in Chrome (don't care about other browsers, tested Chrome 108 and 109 beta) to get this to work.
I see a report here that it works, but not for me.
Please tell me that I'm doing something wrong, or if there's another way to get the speaker stream that doesn't involve chrome.tabCapture or chrome.desktopCapture.
Chrome MV3 extension approaches are welcome too, not only plain HTML5.
.getUserMedia() is used to get input streams, so when you tell it to use a speaker (output) device, it can't comply. gUM's error reporting is, umm, confusing (to put it politely).
To use an output device, use element.setSinkId(deviceId). Make an audio or video element, then set its sink id. Here's the MDN example; it creates an audio element. You can also use a preexisting audio or video element.
const devices = await navigator.mediaDevices.enumerateDevices()
const audioDevice = devices.find((device) => device.kind === 'audiooutput')
const audio = document.createElement('audio')
await audio.setSinkId(audioDevice.deviceId)
console.log(`Audio is being played on ${audio.sinkId}`)
Related
I've got WebRTC working on my Express server, but I want to be able to add the user's stream dynamically. I looked in the simple-peer docs and found this:
var Peer = require('simple-peer') // create peer without waiting for media

var peer1 = new Peer({ initiator: true }) // you don't need streams here
var peer2 = new Peer()

peer1.on('signal', data => {
  peer2.signal(data)
})

peer2.on('signal', data => {
  peer1.signal(data)
})

peer2.on('stream', stream => {
  // got remote video stream, now let's show it in a video tag
  var video = document.querySelector('video')

  if ('srcObject' in video) {
    video.srcObject = stream
  } else {
    video.src = window.URL.createObjectURL(stream) // for older browsers
  }

  video.play()
})

function addMedia (stream) {
  peer1.addStream(stream) // <- add streams to peer dynamically
}

// then, anytime later...
navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true
}).then(addMedia).catch(() => {})
Peer1 sends a stream to Peer2 dynamically, but it's in the same browser. I'm using socket.io so that people are able to join different rooms. I was using this example to get me started: https://github.com/Dirvann/webrtc-video-conference-simple-peer.
If I use the GitHub example above, I understand that I'd have to put:
navigator.mediaDevices.getUserMedia(constraints).then(stream => {
  console.log('Received local stream');
  localVideo.srcObject = stream;
  localStream = stream;
}).catch(e => alert(`getUserMedia error ${e.name}`))
inside a function: call init() first, then call that function later.
But the simple-peer example calls addMedia(stream); how would peer2 receive the stream argument if it weren't in the same browser? In the GitHub code, 'stream' is never sent via socket.emit.
Update:
This is based on the github link.
So I removed the getUserMedia from the beginning and made init() run on its own.
// add my stream to all peers in the room dynamically
function addMyStreamDynamic(stream) {
  for (let index in peers) {
    peers[index].addStream(stream);
  }
}

function addMyVideoStream() {
  navigator.mediaDevices.getUserMedia(constraints).then(stream => {
    localVideo.srcObject = stream;
    localStream = stream;
    addMyStreamDynamic(stream);
  }).catch(e => alert(`getUserMedia error ${e.name}`))
}
Calling addMyVideoStream adds the stream to the other peers, but it's not complete: when I run it before a user joins, the stream does not get sent.
Update 2: The code above works, but only when the initiator calls it.
It seems that dynamically adding a stream as a non-initiator is much more involved. I just created a dummy stream and later replaced the track.
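A rough sketch of that dummy-stream approach with simple-peer (the silent AudioContext placeholder and the variable names are just illustrative):
// Create a silent placeholder stream so the peer negotiates an audio transceiver up front
const audioCtx = new AudioContext();
const silentDest = audioCtx.createMediaStreamDestination();
const dummyStream = silentDest.stream;

const peer = new Peer({ stream: dummyStream }); // non-initiator peer created with the dummy stream

// Later, once the real microphone stream is available, swap the placeholder track out
navigator.mediaDevices.getUserMedia({ audio: true }).then((realStream) => {
  const oldTrack = dummyStream.getAudioTracks()[0];
  const newTrack = realStream.getAudioTracks()[0];
  peer.replaceTrack(oldTrack, newTrack, dummyStream); // simple-peer's track replacement
});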
What are we trying to implement?
We deployed an AI model that streams audio from the microphone and displays the text of the speech to the user, something like this.
What technologies are used?
Python for back-end and the AI model
React for front-end
MediaRecorder Web API to record and configure the audio
WebSocket to connect to the AI API
What's the problem though?
In the front-end, I try to send audio chunks every second as an Int16Array to the back-end. Also, to make sure everything related to the mic and audio works fine, after stopping the recording I can download the first chunk of the audio (just the 1 s one), and it sounds pretty clear. However, when the audio is sent to the back-end it turns into a bunch of noise!
Here's the part of the React code where the recording is handled:
useEffect(() => {
  if (recorder === null) {
    if (isRecording) {
      requestRecorder().then(setRecorder, console.error);
    } else {
      return;
    }
  }

  // Manage recorder state.
  if (isRecording && recorder) {
    recorder.start();
  } else if (!isRecording && recorder) {
    recorder.stop();
  }

  // send the data every second
  const interval = setInterval(() => {
    if (recorder) {
      recorder.requestData();
    }
  }, 1000);

  // Obtain the audio when ready.
  const handleData = e => {
    setAudioURL(URL.createObjectURL(e.data));
    let audioData = []
    audioData.push(e.data)
    const audioBlob = new Blob(audioData, {'type': 'audio/wav; codecs=0'})
    const instanceOfFileReader = new FileReader();
    instanceOfFileReader.readAsArrayBuffer(audioBlob);
    instanceOfFileReader.addEventListener("loadend", (event) => {
      console.log(event.target.result.byteLength);
      const arrayBuf = event.target.result
      const int16ArrNew = new Int16Array(arrayBuf, 0, Math.floor(arrayBuf.byteLength / 2));
      setJsonData(prevstate => ({
        ...prevstate,
        matrix: int16ArrNew,
      }))
    })
  };

  if (recorder) {
    recorder.addEventListener("dataavailable", handleData);
  }

  return () => {
    if (recorder) {
      recorder.removeEventListener("dataavailable", handleData)
      clearInterval(interval)
    }
  };
}, [recorder, isRecording])
Has anyone faced this issue before? I've done a lot of research about it but found nothing that fixes this.
There is a sound recorder and a recorded-sound player on the site. It works fine in Firefox, but I get the following warning in Chromium-based browsers.
[Deprecation] The ScriptProcessorNode is deprecated. Use AudioWorkletNode instead. (https://developers.google.com/web/updates/2017/12/audio-worklet)
this.config = {
  bufferLen: 4096,
  numChannels: 2,
  mimeType: 'audio/ogg'
};
this.recording = false;
this.callbacks = {
  getBuffer: [],
  exportWAV: []
};

Object.assign(this.config, cfg);

this.context = source.context;
this.node = (this.context.createScriptProcessor || this.context.createJavaScriptNode).call(this.context, this.config.bufferLen, this.config.numChannels, this.config.numChannels);

this.node.onaudioprocess = function (e) {
  if (!_this.recording) return;

  var buffer = [];
  for (var channel = 0; channel < _this.config.numChannels; channel++) {
    buffer.push(e.inputBuffer.getChannelData(channel));
  }
  _this.worker.postMessage({
    command: 'record',
    buffer: buffer
  });
};
Full Code on PasteBin
How can I make this change?
The ScriptProcessorNode has been "deprecated" for years, but it isn't going anywhere anytime soon.
However, you don't need it anyway if you want a Vorbis or Opus audio recording in Ogg. Use the MediaRecorder API instead. https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder
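A minimal sketch of that approach (treat the MIME type strings as assumptions to feature-test, since browser support varies):
async function recordWithMediaRecorder() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  // Prefer Ogg/Opus if the browser supports it, otherwise fall back to WebM/Opus
  const mimeType = MediaRecorder.isTypeSupported('audio/ogg;codecs=opus')
    ? 'audio/ogg;codecs=opus'
    : 'audio/webm;codecs=opus';

  const recorder = new MediaRecorder(stream, { mimeType });
  const chunks = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: mimeType });
    // e.g. make the recording playable in an existing <audio> element
    document.querySelector('audio').src = URL.createObjectURL(blob);
  };

  recorder.start();
  // ...call recorder.stop() when the user is done
}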
Here is the problem: when I run this code I get an error saying: song_queue.connection.play is not a function. The bot joins the voice chat correctly, but the error comes when it tries to play a song. Sorry for the large amount of code, but I really want to fix this so my bot can work. I got the code from a YouTube tutorial recorded with discord.js 12.4.1 (my version is the latest, 13.1.0), and I think the error has to do with @discordjs/voice. I would really appreciate any help with getting this to work.
const ytdl = require('ytdl-core');
const ytSearch = require('yt-search');
const { joinVoiceChannel, createAudioPlayer, createAudioResource } = require('@discordjs/voice');
const queue = new Map();

// queue (message.guild.id, queue_constructor object { voice channel, text channel, connection, song[] });

module.exports = {
    name: 'play',
    aliases: ['skip', 'stop'],
    description: 'Advanced music bot',
    async execute(message, args, cmd, client, discord) {
        const voice_channel = message.member.voice.channel;
        if (!voice_channel) return message.channel.send('You need to be in a channel to execute this command');
        const permissions = voice_channel.permissionsFor(message.client.user);
        if (!permissions.has('CONNECT')) return message.channel.send('You dont have permission to do that');
        if (!permissions.has('SPEAK')) return message.channel.send('You dont have permission to do that');

        const server_queue = queue.get(message.guild.id);

        if (cmd === 'play') {
            if (!args.length) return message.channel.send('You need to send the second argument');
            let song = {};

            if (ytdl.validateURL(args[0])) {
                const song_info = await ytdl.getInfo(args[0]);
                song = { title: song_info.videoDetails.title, url: song_info.videoDetails.video_url }
            } else {
                // If the video is not a URL then use keywords to find that video.
                const video_finder = async (query) => {
                    const videoResult = await ytSearch(query);
                    return (videoResult.videos.length > 1) ? videoResult.videos[0] : null;
                }

                const video = await video_finder(args.join(' '));
                if (video) {
                    song = { title: video.title, url: video.url }
                } else {
                    message.channel.send('Error finding your video');
                }
            }

            if (!server_queue) {
                const queue_constructor = {
                    voice_channel: voice_channel,
                    text_channel: message.channel,
                    connection: null,
                    songs: []
                }

                queue.set(message.guild.id, queue_constructor);
                queue_constructor.songs.push(song);

                try {
                    const connection = await joinVoiceChannel({
                        channelId: message.member.voice.channel.id,
                        guildId: message.guild.id,
                        adapterCreator: message.guild.voiceAdapterCreator
                    })
                    queue_constructor.connection = connection;
                    video_player(message.guild, queue_constructor.songs[0]);
                } catch (err) {
                    queue.delete(message.guild.id);
                    message.channel.send('There was an error connecting');
                    throw err;
                }
            } else {
                server_queue.songs.push(song);
                return message.channel.send(`<:seelio:811951350660595772> **${song.title}** added to queue`);
            }
        }
    }
}
const video_player = async (guild, song) => {
    const song_queue = queue.get(guild.id);

    if (!song) {
        song_queue.voice_channel.leave();
        queue.delete(guild.id);
        return;
    }

    const stream = ytdl(song.url, { filter: 'audioonly' });
    song_queue.connection.play(stream, { seek: 0, volume: 0.5 }).on('finish', () => {
        song_queue.songs.shift();
        video_player(guild, song_queue.songs[0]);
    });
    await song_queue.text_channel.send(`<:seelio:811951350660595772> Now Playing **${song.title}**`);
}
Discord.js V13 and @discordjs/voice
Since a relatively recent update to the Discord.js library, a lot has changed in the way you play audio files or streams over your client in a Discord voice channel. There is a really useful guide by Discord that explains a lot of this at a base level right here, but I'm going to compress it a bit and explain what is going wrong and how you can get it to work.
Some prerequisites
It is important to note that for anything to do with voice channels, your bot needs the GUILD_VOICE_STATES intent in your client. Without it your bot will not actually be able to connect to a voice channel, even though it may seem like it does. If you don't know what intents are yet, here is the relevant page from the same guide.
Additionally you will need some extra libraries that help with processing and streaming audio files. They do a lot of work in the background that you do not need to worry about, but without them playing any audio will not work. To find out what you need you can use the generateDependencyReport() function from @discordjs/voice. Here is the page explaining how to use it and what dependencies you will need. To use the function you will have to import it from the @discordjs/voice library.
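For example, a quick check might look like this:
const { generateDependencyReport } = require('@discordjs/voice');

// Prints which encryption, opus and ffmpeg dependencies were found
console.log(generateDependencyReport());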
Playing audio over a client
So once everything is set up you can get to actually playing audio and music. You're already a good few steps along the way by using ytdl-core and getting a stream object from it, but audio is no longer played by calling .play() on the connection. Instead you will need to utilize AudioPlayer and AudioResource objects.
AudioPlayer
The AudioPlayer is essentially your jukebox. You can make one by simply calling its function and storing that in a const like so:
const player = createAudioPlayer()
This is a function from the @discordjs/voice library and will have to be imported just like generateDependencyReport().
There are a few parameters you can give it to modify its behavior, but right now that is not important. You can read more about that on its page from the Discord guide right here.
AudioResource
To get your AudioPlayer to actually play anything you will have to create an AudioResource. This is basically a version of your file or stream adapted to work with the player. This is very simply done with another function from the @discordjs/voice library called createAudioResource(...). This must once again be imported. As a parameter you can pass the location of an mp3 or webm file, but you can also use a stream object like the one you have already acquired. Just pass the stream as the parameter of that function.
To now play the resource there are two more steps. First you must subscribe your connection to the player. This basically tells the connection to broadcast whatever your AudioPlayer is playing. To do this simply call the .subscribe() function on your connection object with the player as a parameter like so:
connection.subscribe(player)
player.play(resource)
The second line of code you see above is how you get your player to play your AudioResource. Just pass the resource as a parameter and it will start playing. You can find more on the AudioResource side of things on its page in the Discord guide right here.
This way takes a few more steps than it did in V12, but once you get the hang of this system it really isn't that bad or difficult.
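Put together with the ytdl stream you already have, the playing portion could look roughly like this (a sketch using your existing variable names, not a drop-in rewrite of your whole video_player function):
const { createAudioPlayer, createAudioResource, AudioPlayerStatus } = require('@discordjs/voice');

const stream = ytdl(song.url, { filter: 'audioonly' });
const resource = createAudioResource(stream);
const player = createAudioPlayer();

// Tell the connection to broadcast whatever this player plays
song_queue.connection.subscribe(player);
player.play(resource);

// Rough replacement for the old .on('finish') handler
player.on(AudioPlayerStatus.Idle, () => {
    song_queue.songs.shift();
    video_player(guild, song_queue.songs[0]);
});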
Leaving a voice channel
There is another thing that is going wrong in your code when you try to leave a voice channel. I can see that you did figure out how to join in V13 already, but .leave() unfortunately is no longer a valid function. Now, to leave a voice channel you must retrieve the connection object that you get from calling joinVoiceChannel(...) and call either .disconnect() or .destroy() on it. They are almost the same, but the latter also makes it so that you cannot use the connection again.
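A rough sketch (getVoiceConnection is the @discordjs/voice helper that looks a connection up by guild id):
const { getVoiceConnection } = require('@discordjs/voice');

// Instead of song_queue.voice_channel.leave():
const connection = getVoiceConnection(guild.id); // or keep using the connection stored on your queue object
connection.destroy(); // use connection.disconnect() instead if you may want to reuse the connection later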
Good day! I'm into video chat streaming this morning and I've bumped into a problem with the incoming ArrayBuffer, which contains the binary data of an audio stream.
Here is the code I found for playing binary audio data (Uint8Array):
function playByteArray(byteArray) {
    var arrayBuffer = new ArrayBuffer(byteArray.length);
    var bufferView = new Uint8Array(arrayBuffer);
    for (var i = 0; i < byteArray.length; i++) {
        bufferView[i] = byteArray[i];
    }

    context.decodeAudioData(arrayBuffer, function(buffer) {
        buf = buffer;
        play();
    });
}

// Play the loaded file
function play() {
    // Create a source node from the buffer
    var source = context.createBufferSource();
    source.buffer = buf;
    // Connect to the final output node (the speakers)
    source.connect(context.destination);
    // Play immediately
    source.start(0);
}
Now below, I've used MediaStreamRecorder from https://github.com/streamproc/MediaStreamRecorder to record the stream from getUserMedia. This code will continuously send the recorded binary data to the server.
if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true, video: true}, function(stream) {
        video.src = (window.URL || window.webkitURL).createObjectURL(stream); // get this for the video stream url
        video.muted = true;

        multiStreamRecorder = new MultiStreamRecorder(stream);
        multiStreamRecorder.canvas = {
            width: video.width,
            height: video.height
        };
        multiStreamRecorder.video = video;
        multiStreamRecorder.ondataavailable = function(blobs) {
            var audioReader = new FileReader();
            audioReader.addEventListener("loadend", function() {
                var arrBuf = audioReader.result;
                var binary = new Uint8Array(arrBuf);
                streamToServ.write(binary);
                // streamToServ is the binaryjs client
            });
            audioReader.readAsArrayBuffer(blobs.audio);
        };
        multiStreamRecorder.start(1);
    }, onVideoFail);
} else {
    alert('failed');
}
I convert the blobs produced (audio and video) to binary and send them to binaryjs, and they are played on another client with this:
client.on('stream', function (stream, meta) {
    stream.on('data', function(data) {
        playByteArray(new Uint8Array(data));
    });
});
Transferring the binary data works fine, but there is a noticeable hiccup in the playback at the boundary of every chunk that is played. Is there something wrong with how I play the incoming ArrayBuffers? I'm also thinking of asking streamproc about this.
Thanks in advance!
Cheers.
I found a solution to this problem by implementing an audio buffer queue. Most of the code is from here:
Choppy/inaudible playback with chunked audio through Web Audio API
Thanks.
Not sure if this is the problem, but perhaps instead of source.start(0), you should use source.start(time), where time is when you want the source to start. source.start(0) starts playing immediately, so if your byte arrays come in faster than real time, the sources may overlap because each one starts as soon as possible.
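A minimal sketch of that idea, keeping a running cursor so each decoded chunk is scheduled right after the previous one (names are illustrative; context is the AudioContext from your code):
var playTime = 0; // AudioContext time at which the next chunk should start

function playQueued(buffer) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);

    // Never schedule in the past; otherwise start exactly where the last chunk ends
    playTime = Math.max(playTime, context.currentTime);
    source.start(playTime);
    playTime += buffer.duration;
}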