Twilio Video: what if I want just audio, just video, or both?

This is a question on execution.
On video chat creation, each user gets a div created for them, which is just a black picture and their name.
When they click the start video button in my UI, a localVideoTrack is created and published to all subscribers. The code then appends that video track to the UI.
But what about when I want an audio-only track and no video at all?
Or what if I want audio and video, but then want to mute the audio?
My thought is this:
You create a new local video track and enable either audio or video or both. When you want to change the state of a track, like turning off audio, you create a local track again without audio, publish it, remove the current video track from the UI, and replace it with the new one.
Or I could just use separate video and audio tracks, but I don't know if that is the right move.
Input would be appreciated!

Twilio developer evangelist here.
Video tracks and audio tracks are different. A video track is only concerned with the camera and the visuals of a participant; an audio track is only concerned with the microphone and the sound of a participant. So when you create a new video track, it should only ask for access to the camera and only publish a single video track. When you create a new audio track, it should only ask for access to a microphone and only publish a single audio track. When you create local tracks, or connect to a room and try to publish both audio and video, then permission is asked for both camera and microphone access, and two tracks, one for video and one for audio, are published.
At any stage after your participant connects to a Twilio Video room, you can publish new video/audio tracks to add them to the participant. You can also unpublish those tracks to completely remove them from the participant.
Once a track is published, you can disable/enable it, which mutes the audio/video without unpublishing it from the room. This is a quicker process than unpublishing and republishing.
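To make that concrete, here is a minimal sketch using the twilio-video.js SDK; `room` is assumed to be the Room object returned by connect():

const { createLocalAudioTrack, createLocalVideoTrack } = require('twilio-video');

async function demo(room) {
  // Audio only: prompts for microphone access and publishes a single audio track.
  const audioTrack = await createLocalAudioTrack();
  await room.localParticipant.publishTrack(audioTrack);

  // Add video later: prompts for camera access and publishes a single video track.
  const videoTrack = await createLocalVideoTrack();
  await room.localParticipant.publishTrack(videoTrack);

  // Mute/unmute: disable or enable the track without unpublishing it.
  audioTrack.disable();
  audioTrack.enable();

  // Remove the video entirely: unpublish and stop the track.
  room.localParticipant.unpublishTrack(videoTrack);
  videoTrack.stop();
}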

Related

Is it possible to track an audio event in Twilio?

Hi, I would like to know whether, if my own audioTrack is muted and I start speaking while muted, an event can be returned. This would be similar to Teams telling you that you are muted.
The general question is probably whether we are able to track audio events while speaking, because I believe dominant speaker is the only audio speaking event I see in Twilio. Any hints on obtaining an audio speaking event would be great.
Twilio developer evangelist here.
It sounds like you are using Twilio Video (since you mention dominant speaker events). Twilio Video itself doesn't have "audio speaking" events, neither does the web platform itself.
You can, however, do some audio analysis in the browser to tell whether a person is making noise, and compare that to whether their audio track is currently enabled, in order to show a warning that they are speaking while muted.
To do so, you would need to access the localParticipant's audio track. From that you can get the underlying mediaStreamTrack, turn it into a MediaStream and then pass it to the web audio API for analysis. I have an example of doing this to show the volume of localParticipant's audio here: https://github.com/philnash/phism/blob/main/client/src/lib/volume-meter.js.
Once you have that volume, you can choose a threshold above which you decide a user is trying to speak, and then check whether that threshold is crossed while the user is muted.
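A minimal sketch of that idea, assuming `audioTrack` is the LocalAudioTrack from twilio-video.js and that the threshold value is something you would tune for your environment:

const audioContext = new AudioContext();
const stream = new MediaStream([audioTrack.mediaStreamTrack]);
const source = audioContext.createMediaStreamSource(stream);
const analyser = audioContext.createAnalyser();
analyser.fftSize = 256;
source.connect(analyser);

const samples = new Uint8Array(analyser.frequencyBinCount);
const SPEAKING_THRESHOLD = 30; // assumed value, tune as needed

setInterval(() => {
  // Average the frequency data as a rough volume measure.
  analyser.getByteFrequencyData(samples);
  const volume = samples.reduce((sum, v) => sum + v, 0) / samples.length;
  if (volume > SPEAKING_THRESHOLD && !audioTrack.isEnabled) {
    console.warn('You appear to be speaking while muted.');
  }
}, 200);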
Let me know if that helps.

Take the audio of the YouTube video element

Intro:
I want to play a YouTube video clip and be able to control its state during the session (to sync it between users). I want the YouTube video to play through the currently chosen devices in my WebRTC app, e.g. I can choose a specific audio output for the app from the three that I have.
The problem that I have:
I am trying to get the YouTube video's audio in order to route it to the relevant audio output device that I have. Currently, when I play the YouTube video, the audio plays through the default audio output device and not through the one chosen in my app (I have the selected device ID saved).
What I actually want to achieve:
I want to play the YouTube player and hear the video's audio track through the chosen audio output device (aka the chosen speaker) and not through the default one.
What I am using:
Currently using React-Player with my addons.
I am using React and Node
Again:
The problem here is that the video will play through the default audio output of each client (I cannot attach it to a specific one).
setSinkId is not reachable (see the sketch at the end of this question).
Ideas:
Take the video element and get the audio track - not possible with an iframe.
Use the YouTube API for it - I've never seen such an option.
Save it as mp3, serve the audio, and do the sync footwork myself - I'd prefer not to.
Please let me know if you have an idea.
Thanks!
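For context on the setSinkId point above: HTMLMediaElement.setSinkId() routes a media element's audio to a chosen output device, but only for elements in your own document; the YouTube player's internal video element lives in a cross-origin iframe, so it is unreachable. A minimal sketch of the API on an element you do own (routeToDevice is a hypothetical helper):

// `deviceId` is assumed to come from navigator.mediaDevices.enumerateDevices()
// (entries with kind === 'audiooutput'); setSinkId requires a secure context.
async function routeToDevice(mediaElement, deviceId) {
  if (typeof mediaElement.setSinkId !== 'function') {
    throw new Error('setSinkId is not supported in this browser');
  }
  await mediaElement.setSinkId(deviceId);
}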

Create dynamic audio broadcast stream (node, ffmpeg, ..?)

I have coded a videoboard. Like a soundboard, but with video. You go to one URL that's just a black screen and to another one which has a list of different videos (the sender). When you click one of these videos, it plays on the black screen (the receiver). If you play two different videos at the same time, both are shown next to each other on the receiver. That's been working fine for several months now. It just creates multiple HTML video elements with multiple source tags (x265 mp4 and vp9 webm).
I recently made a Discord bot which takes the webm, extracts the Opus stream, and plays its sound in the voice channel where the bot is connected. This has one disadvantage: it can only play one sound at a time. It happens a lot that multiple videos/sounds are playing at the same time, so this is a bit of a bummer.
So I thought I should create an audio stream on the server which hosts the videoboard and just connect the bot to that stream. But I have no clue how to do this. All I know is that it's very likely going to involve ffmpeg.
What would be the best option here? What I think I need is basically an infinite silence stream, plus the possibility to add an audio file onto that stream at any point, which would play simultaneously with other audio files that were added before and haven't finished playback yet. How is that possible? Somehow with m3u8 playlist files or via the RTSP protocol?
Thanks :)
I think this can be helpful for you: https://bitbucket.org/kaleniuk_ihor/neuro_vision/src/db_watch/
Also, this library was very useful for me: https://github.com/kyriesent/node-rtsp-stream. You can just install it with npm i node-rtsp-stream.
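As a starting point for the "infinite silence" base stream idea from the question, here is a minimal sketch in Node, assuming ffmpeg is installed; mixing additional files in at runtime would still need a filter graph (e.g. ffmpeg's amix filter) or a dedicated mixing tool, which this sketch does not cover:

const { spawn } = require('child_process');

// Generate an endless silent Opus stream and write it to stdout.
const ffmpeg = spawn('ffmpeg', [
  '-re',                                             // read input in real time
  '-f', 'lavfi', '-i', 'anullsrc=r=48000:cl=stereo', // infinite silence source
  '-c:a', 'libopus',                                 // encode as Opus
  '-f', 'ogg',                                       // Ogg container
  'pipe:1',                                          // write to stdout
]);

// Pipe the stream to whatever consumes it (e.g. an HTTP response or the bot).
ffmpeg.stdout.pipe(process.stdout);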

Access to audio from audio card with WebRTC

I'd like to be able to capture the audio from my computer's sound card and dispatch it with WebRTC. However, I am not sure whether it's possible to get access to the audio produced directly by my computer.
According to this repo https://github.com/niklasenbom/RecordingApp/blob/master/app.js there is some system audio handling, but I'm not sure if it's what I'm looking for.
Thanks,
You can do it by using NAudio. I actually did the same project myself and will put it on GitHub in a few weeks, then update this answer. You can configure the frequency etc. and use its OnDataAvailable event to dispatch the sound to registered clients.
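Note that NAudio is a .NET library, so that approach runs server-side. As a browser-side alternative (an assumption on my part, not the approach described above), Chromium-based browsers can capture system or tab audio via getDisplayMedia and feed it into a WebRTC peer connection:

// `peerConnection` is assumed to be an existing RTCPeerConnection.
async function shareSystemAudio(peerConnection) {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true, // a video track must be requested; audio-only requests are rejected
    audio: true, // system/tab audio, where the browser supports it
  });
  for (const track of stream.getAudioTracks()) {
    peerConnection.addTrack(track, stream);
  }
}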

Does Chromecast single video stream restriction apply to Audio objects?

The Receiver application guidelines state that "Chromecast devices only support 1 concurrent media stream for playback", and then go on to discuss it only in terms of video. I find that just creating a single Audio object like this...
new Audio("/sound/beep1.mp3");
...will prevent any subsequent video from playing. Is this expected behavior?
Chromecast only supports one active media element at a time, so if you have an Audio element/object, then you cannot have another Video element/object.
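Given that constraint, one way to work within it is to tear down the audio element before starting video playback. A minimal sketch, where `videoElement` is a hypothetical video element in the receiver app:

const beep = new Audio('/sound/beep1.mp3');
beep.play();

function startVideo(videoElement, url) {
  // Free the single media slot: stop the beep and release its resource.
  beep.pause();
  beep.removeAttribute('src');
  beep.load();
  videoElement.src = url;
  videoElement.play();
}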
