How to broadcast the audio of a web page to an Icecast server?

I'd like to broadcast the audio of a web page; in my case, the page plays YouTube videos kept in sync for everyone viewing it at the same time.
I'd like to do this on a Debian 8 server with no graphical interface and no sound card.
At first I thought I could manage it with Liquidsoap, but I couldn't find a way to make it work.
Searching on Google, I couldn't find anyone else trying to do the same thing.
Does anyone have an idea?
Thanks.
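For what it's worth, one commonly suggested approach on a headless Linux box is to route the browser's sound into a PulseAudio null sink and encode that sink's monitor to Icecast with ffmpeg. Everything below is a sketch under assumptions (PulseAudio, Chromium, and ffmpeg installed; the page URL, Icecast host, mount, and password are placeholders), not a tested recipe:

```shell
# 1. Create a virtual sound device (no real sound card needed).
pactl load-module module-null-sink sink_name=web_audio

# 2. Open the page in a browser whose audio output goes to that sink.
PULSE_SINK=web_audio chromium "https://example.com/synced-player" &

# 3. Capture the sink's monitor and push it to Icecast as MP3.
ffmpeg -f pulse -i web_audio.monitor \
       -codec:a libmp3lame -b:a 128k -content_type audio/mpeg \
       -f mp3 icecast://source:PASSWORD@icecast.example.com:8000/webpage.mp3
```

The browser still needs an X display in some setups; a virtual framebuffer such as Xvfb is the usual workaround on a server with no graphical interface.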

Related

Stream music from streaming platform (Deezer, Spotify, Soundcloud) to Web Audio API

Does anyone know a way to get the audio stream from a music platform and plug it into the Web Audio API?
I am building a music visualizer based on the Web Audio API. It currently reads sound from my computer's microphone and renders a visualization in real time. If I play music loudly enough, my visualizer works!
Now I'd like to read only the sound coming from my computer, so that the visualization responds to the music alone and not to other sounds such as people chatting.
I know I can buffer an MP3 file into the API and it would work perfectly. But in 2020, streaming music via Deezer, Spotify, SoundCloud, etc. is very common.
They all have APIs, but usually in the form of an SDK that lets you do little more than "play" music; there is no easy access to the raw audio data. Maybe I am wrong, and that is why I am asking for your help.
Thanks
The way to stream music into Web Audio is to use a MediaElementAudioSourceNode or a MediaStreamAudioSourceNode. However, these nodes output silence unless you're allowed to access the data: you have to set the CORS attribute correctly on your end, and the server must also allow the access through CORS.
A Google search will help with setting up CORS, but many sites won't allow access unless you have the right permissions; in that case you are out of luck.
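As a concrete illustration of the CORS point, here is a hedged sketch: the stream URL and the `needsCors` helper are made up for the example, while the media-element and Web Audio calls are the standard browser APIs.

```javascript
// Pure helper (illustrative): does fetching `url` from `pageOrigin` involve
// a cross-origin request at all? Same-origin media needs no CORS setup.
function needsCors(url, pageOrigin) {
  return new URL(url, pageOrigin).origin !== pageOrigin;
}

// Browser-only part, guarded so the helper above runs anywhere.
if (typeof document !== "undefined") {
  const audio = new Audio();
  audio.crossOrigin = "anonymous"; // request a CORS-enabled fetch
  audio.src = "https://streams.example.com/radio.mp3"; // placeholder URL

  const ctx = new AudioContext();
  const source = ctx.createMediaElementSource(audio);
  const analyser = ctx.createAnalyser();
  source.connect(analyser);
  analyser.connect(ctx.destination);
  // If the server does NOT send Access-Control-Allow-Origin, the element
  // still plays, but the analyser reads only zeros.
}

console.log(needsCors("https://streams.example.com/radio.mp3", "https://myviz.example"));
```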
I found a "no-code" workaround. At least on Ubuntu 18.04, I am able to tell Firefox to use my speakers' monitor as the "microphone input".
You just have to select the right "mic" from the list when the browser asks for microphone permission.
This solution is very convenient, since I don't need to write platform-specific binding code to access the audio stream.

Capture everything on computer screen and audio. What technologies/apis are involved?

I use Skype (macOS native app) and Google Hangouts (web) a lot for work. I will speak with clients using these apps while my headphones are plugged in to prevent others in the office from hearing both sides of the conversations.
I would like to find a way to record the video and audio from these conversations. Can I record the audio coming in from Skype if I create a web app? I don't think WebRTC can solve this.
Apologies for the vagueness and open-endedness of this question, but searching online only returns app-specific answers.
I would like to record the audio and video on my side of the conversation with JavaScript. I'm open to a web app, a Chrome/Firefox plugin, or something else I'm not familiar with.
Thank you
I've had nothing but great experiences with Loom (and it's free!). https://www.useloom.com/
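For a do-it-yourself approach inside the browser, the APIs involved are getDisplayMedia (screen capture, plus tab/system audio where the browser supports it) and MediaRecorder. The sketch below is illustrative, not a drop-in solution: `recordScreen` and `pickMimeType` are made-up names, and a web app can only capture what the browser itself can see, so the incoming audio of a native Skype app is generally out of its reach.

```javascript
// Pure helper: pick the first container/codec the browser supports.
// `isSupported` is injected so the choice is testable outside a browser.
function pickMimeType(candidates, isSupported) {
  return candidates.find(isSupported) || "";
}

async function recordScreen() {
  // Browser-only: prompts the user to pick a screen, window, or tab.
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true, // tab/system audio, where supported
  });
  const mimeType = pickMimeType(
    ["video/webm;codecs=vp9,opus", "video/webm"],
    (t) => MediaRecorder.isTypeSupported(t)
  );
  const recorder = new MediaRecorder(stream, { mimeType });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: mimeType });
    console.log("recorded", blob.size, "bytes");
  };
  recorder.start();
  return recorder; // call recorder.stop() when finished
}
```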

Nodejs Stream User's Webcam

I want to make a web app that streams a user's webcam and broadcasts it to viewers,
like one to many!
I know getUserMedia() will help me capture the user's webcam; now, how do I stream this data along with the audio?
I googled this and found a few results saying WebRTC and PeerJS can do it, but I need a kick-start guide, like some code or documentation!
Well, you can try EasyRTC, a Node.js library that adds WebRTC support to your apps. It is open source and packs some cool demos for you to start making awesome stuff.
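If you want to see the raw WebRTC pieces before reaching for a library, here is a rough sketch of the broadcaster side, assuming one RTCPeerConnection per viewer. `startBroadcast`, `sendToViewer`, `isSignal`, and `handleSignal` are hypothetical names, and the signaling transport (e.g. a WebSocket server) is left out entirely.

```javascript
// Pure helper: a signaling message should carry an SDP or an ICE candidate.
function isSignal(msg) {
  return Boolean(msg && (msg.sdp || msg.candidate));
}

// Apply an incoming signaling message from a viewer to its connection.
async function handleSignal(pc, msg) {
  if (!isSignal(msg)) return false;
  if (msg.sdp) await pc.setRemoteDescription(msg.sdp);
  if (msg.candidate) await pc.addIceCandidate(msg.candidate);
  return true;
}

// Capture the webcam once, then add the same tracks to one peer
// connection per viewer. `sendToViewer(id, msg)` is your own channel.
async function startBroadcast(viewerIds, sendToViewer) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  const peers = new Map();
  for (const id of viewerIds) {
    const pc = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
    });
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    pc.onicecandidate = (e) => {
      if (e.candidate) sendToViewer(id, { candidate: e.candidate });
    };
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendToViewer(id, { sdp: pc.localDescription });
    peers.set(id, pc);
  }
  return peers;
}
```

Note that each extra viewer costs the broadcaster a full copy of the upstream bitrate, which is exactly why one-to-many setups usually put an SFU or media server in the middle.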

Video chat: is red5 faster/needed? Why not just p2p?

Pardon my ignorance, but I am researching how to build a video chatroom, and what I am finding seems counterintuitive to me. From what I have read, the standard is for each user to stream their video to a media server, like red5, which then sends the stream to the other person. Intuitively this seems to add a middleman that introduces lag, because the video has to go to a server and then turn around and go to a person, rather than directly to that person. Why not just go p2p with something like Adobe Stratus/Cirrus: use the service to get the other user's IP, then stream your video to them directly? Yet it seems like almost everyone uses a media server like red5.
What am I failing to understand here? What is the advantage of having this "middle man"?
Streaming the video to every viewer yourself would require a lot of bandwidth (download speeds may be high enough, but residential upload speeds are usually low). NAT also makes it difficult to connect to a specific computer, because from the public side there is only one IP address for all the computers behind the router.
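The bandwidth point is easy to put in numbers: in pure p2p, the broadcaster's upload link must carry one full copy of the stream per viewer. A trivial illustration (`uploadNeededMbps` is a made-up helper, not from any library):

```javascript
// Rough upload-bandwidth estimate for pure p2p one-to-many streaming:
// the sender uploads one copy of the stream to every viewer.
function uploadNeededMbps(streamKbps, viewers) {
  return (streamKbps * viewers) / 1000;
}

// A 500 kbps webcam stream to 10 peers needs ~5 Mbps of sustained upload,
// which is more than many residential connections can provide.
console.log(uploadNeededMbps(500, 10)); // 5
```

A media server flips this: each participant uploads once, and the server fans the stream out to everyone else.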

Setup a VPS as a streaming replicator (for an online radio)?

I run a real radio station, and we stream it over the internet through our website. However, the station has been growing and our internet link can no longer support the number of listeners.
My idea was to get a VPS, stream the radio from our office to the VPS, and then have the VPS stream to the listeners on our website.
Could someone suggest a way to do that?
I know it is possible, but I don't know where to start.
This is pretty easy, and is the normal way to get it done. Set up a SHOUTcast or Icecast server on this VPS, and have it either relay your existing stream on your DSL connection, or connect your encoder directly to SHOUTcast/Icecast on the VPS.
Alternatively, look for SHOUTcast/Icecast hosting if you aren't comfortable setting this up yourself. I'm doing some free experimental hosting for small stations at the moment. E-mail preview#audiopump.co if you're interested.
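For the relay variant of this answer, Icecast can pull a specific mount from your origin server via a relay block in the VPS's icecast.xml. A sketch with placeholder hostnames and mount names:

```xml
<!-- Sketch only: hostnames, ports, and mounts are placeholders. -->
<relay>
    <server>radio.example.com</server>   <!-- your office/origin stream host -->
    <port>8000</port>
    <mount>/live</mount>                 <!-- mount on the origin server -->
    <local-mount>/live</local-mount>     <!-- mount listeners use on the VPS -->
    <relay-shoutcast-metadata>1</relay-shoutcast-metadata>
</relay>
```

The other option from the answer, pointing your encoder straight at the VPS, needs no relay block at all; the VPS simply becomes the origin server.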
