This is what I want to achieve.
1. There is a channel, and only selected users can live stream audio to the channel while everyone can listen.
2. Those selected users can be streaming audio from different devices.
Is it possible to do something like this using streaming engines like Wowza, etc.? If not, any directions on how to proceed would be appreciated.
This might look similar to a conference call where some users are muted.
I am looking for a scalable solution.
Hi, I'd like to know whether, if my own audioTrack is muted and I start speaking while muted, it can return an event. This would be similar to Teams telling you that you are muted.
The more general question is probably whether we are able to track audio events while speaking, because I believe dominant speaker is the only audio speaking event I see on Twilio. Any hints on obtaining an audio speaking event would be great.
Twilio developer evangelist here.
It sounds like you are using Twilio Video (since you mention dominant speaker events). Twilio Video itself doesn't have "audio speaking" events, and neither does the web platform itself.
You can, however, do some audio analysis in the browser to tell whether a person is making noise, and compare that to whether their audio track is currently enabled in order to show a warning that they are speaking while muted.
To do so, you would need to access the localParticipant's audio track. From that you can get the underlying mediaStreamTrack, turn it into a MediaStream and then pass it to the Web Audio API for analysis. I have an example of doing this to show the volume of the localParticipant's audio here: https://github.com/philnash/phism/blob/main/client/src/lib/volume-meter.js.
Once you have that volume you can choose a threshold above which you decide a user is trying to speak, and then check whether that threshold is crossed while the user is muted.
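Here's a minimal sketch of that approach, assuming `room` is a connected Twilio Video Room with one published local audio track. The function name, the 0.1 threshold, and the 200 ms poll interval are illustrative choices, not part of the SDK:

```javascript
function monitorSpeakingWhileMuted(room, onSpeakingWhileMuted) {
  // Grab the first published local audio track (assumes one exists).
  const publication = Array.from(room.localParticipant.audioTracks.values())[0];
  const localAudioTrack = publication.track;

  // Wrap the underlying MediaStreamTrack in a MediaStream for Web Audio.
  const stream = new MediaStream([localAudioTrack.mediaStreamTrack]);
  const audioContext = new AudioContext();
  const analyser = audioContext.createAnalyser();
  analyser.fftSize = 512;
  audioContext.createMediaStreamSource(stream).connect(analyser);

  const samples = new Uint8Array(analyser.frequencyBinCount);

  setInterval(() => {
    analyser.getByteFrequencyData(samples);
    // Average the bins into a rough 0..1 volume figure.
    const volume =
      samples.reduce((sum, value) => sum + value, 0) / samples.length / 255;

    // isEnabled is false while the user is muted via track.disable().
    if (!localAudioTrack.isEnabled && volume > 0.1) {
      onSpeakingWhileMuted(volume);
    }
  }, 200);
}
```

You'd tune the threshold and interval for your environment, and likely debounce the callback so the warning doesn't fire on every poll.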
Let me know if that helps.
Do any of you know a way to get the audio stream of a music platform and plug it into the Web Audio API?
I am doing a music visualizer based on the Web Audio API. It currently reads sound from my computer's mic and renders a real-time visualization. If I play music loud enough, my viz works!
But now I'd like to move on and read only the sound coming from my computer, so that the visualization responds only to the music and not to other sounds such as people chatting.
I know I can buffer an MP3 file in that API and it would work perfectly. But in 2020, streaming music is very common, via Deezer, Spotify, SoundCloud, etc.
I know they all have APIs, but they often offer an SDK where you cannot really do more than "play" music. There is no easy access to the stream of audio data. Maybe I am wrong, and that is why I'm asking for your help.
Thanks
The way to stream music to Web Audio is to use a MediaElementAudioSourceNode or MediaStreamAudioSourceNode. However, these nodes will output zeros unless you're allowed to access the data. This means you have to set the CORS property correctly on your end, and the server must also allow the access through CORS.
A Google search will help with setting up CORS, but many sites won't allow access unless you have the right permissions; in that case, you are out of luck.
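A rough sketch of the happy path (the stream URL is a placeholder; the server behind it must send Access-Control-Allow-Origin headers, or the analyser will only ever see zeros):

```javascript
// Feed a remote audio stream into Web Audio via MediaElementAudioSourceNode.
const audio = new Audio();
audio.crossOrigin = 'anonymous'; // set the CORS mode on your end
audio.src = 'https://example.com/stream.mp3'; // placeholder URL

const audioContext = new AudioContext();
const source = audioContext.createMediaElementSource(audio);
const analyser = audioContext.createAnalyser();

source.connect(analyser);
analyser.connect(audioContext.destination); // keep the music audible

// Must usually be triggered by a user gesture (browser autoplay policy).
audio.play();
```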
I found a "no-code" workaround. At least on Ubuntu 18.04, I am able to tell Firefox to take my speakers as the "microphone input".
You just have to select the right "mic" in the list when your browser asks for mic permission.
That solution is very convenient, since I don't need to write platform-specific binding code to access the audio stream.
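The browser side stays the plain getUserMedia flow; picking the loopback device at the permission prompt is the only "configuration". A minimal sketch, assuming the code runs in an async function or module:

```javascript
// Standard mic capture; at the permission prompt, pick the loopback
// device (e.g. PulseAudio's "Monitor of ..." entry) instead of a real mic.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

const audioContext = new AudioContext();
const analyser = audioContext.createAnalyser();
audioContext.createMediaStreamSource(stream).connect(analyser);
// ...drive the visualization from the analyser data as before.
```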
I'd like to be able to capture the audio from my computer's audio card and dispatch it with WebRTC. However, I am not sure whether it's possible to access the audio produced directly by my computer.
According to this repo https://github.com/niklasenbom/RecordingApp/blob/master/app.js there is some system audio handling, but I'm not sure if it's what I'm looking for.
Thanks,
You can do it by using NAudio. I actually did the same project myself and will put it on GitHub in a few weeks and update this answer. You can configure the frequency, etc., and use its DataAvailable event to dispatch the sound to registered clients.
I am working on a project for large-group broadcasting in WebRTC. Since it needs to work on iOS and Android devices, I am using Kurento and the iOSWEBRTC Cordova plugin to build this. I am curious if anyone can help improve my plan, or if there is an easier way to achieve this.
We need to have a video/audio conference with 5 people per room; however, we need to be able to show that video to large audiences. My idea would be to use Kurento as a middleman and capture the streams into .webm files for live playback as the conference is going on.
Is there a better way to achieve this? And how would I play back the .webm file as it is being recorded? It needs to update and continue playing as more video is sent, basically a live-stream copy of the camera.
I am unsure if I am going the best route, but I figured this would reduce the bandwidth compared to my original idea, which was:
A 5-person conference for broadcasters, with X viewers each downloading those streams directly. However, I realized the upload bandwidth requirement would be crazy high, which is why I settled on this idea. Additionally, the viewers do not have to see things in real time like the broadcasters do; the broadcasters need to be able to see and communicate with each other at the same time, while the viewers can be a few seconds behind.
TL;DR:
Trying to make a 5-person video conference with video/audio capturing, and then live stream it to viewers' players. This would avoid PeerConnection bandwidth limitations. Would this work, or am I forgetting something?
You'll need to look into using an SFU or MCU. An MCU is very costly, but it multiplexes video streams and sends down a single video stream to all peers, and it can also record that stream. An SFU is a single point of receipt for all streams, and it selectively forwards them to clients. It could record the individual streams, and you could then do post-processing to make a single recording out of the multiple recorded streams. A mesh network of connections really doesn't work for this use case.
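If you record the individual streams in the browser rather than on the server, MediaRecorder can produce the .webm chunks the question mentions. A rough sketch, where the `uploadChunk` callback is hypothetical and stands in for whatever transport ships chunks to your server:

```javascript
// Record a MediaStream to WebM chunks, e.g. one recorder per participant.
function recordStream(stream, uploadChunk) {
  const recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm; codecs=vp8,opus',
  });

  // Fires once per timeslice with a Blob of encoded data, so the server
  // can assemble and serve the recording while it is still in progress.
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      uploadChunk(event.data); // hypothetical: POST or WebSocket to server
    }
  };

  recorder.start(1000); // timeslice in milliseconds
  return recorder;
}
```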
I am working on a project which will involve HTTP live media streaming from a variety of devices: Android phones/tablets, iPhone, iPad, browsers, etc. It will be two-way communication for all the devices, with multiple devices connected to a conversation. I have implemented it partially, i.e. one way, by capturing audio from an Android phone (native app) and streaming it to a web browser (HTML5 app) with a PHP server using ffmpeg and cvlc.

I wanted to know the best way to go ahead with it, e.g. whether there are any standards to be followed. Also, what kind of server should I be using? I don't want to use a streaming server like Red5. I would like to implement the streaming logic myself, similar to Apple's HTTP Live Streaming. I have come across MPEG-DASH, which seems to be a standard for HTTP streaming; I still have to look deeper into it. I was also thinking of using Node.js for its popularity with streaming.

Another worry was how to go about capturing media from the devices. Should I use the native capability of the devices to convert media into MP4 (or any container they support) and then stream it to the server, or should I capture audio and images for a particular period of time and then send them to the server to create a common output? (I am not really sure of this idea.) The separate capture is basically for simplifying the process of streaming video from the server to any device. I was also thinking about whether I could completely bypass the server in some cases, like a phone-to-phone or phone-to-tablet connection.
I just wanted to be sure of the things I will be using/implementing so that I wouldn't have to make drastic changes later on. Any help is deeply appreciated. Thank you.