I have coded a videoboard. Like a soundboard, but with video. You go to one URL that's just a black screen, and to another one which has a list of different videos (the sender). When you click one of these videos, it plays on the black screen (the receiver). If you play 2 different videos at the same time, both videos are shown next to each other on the receiver. That has been working fine for several months now. It just creates multiple HTML video elements with multiple source tags (x265 mp4 and VP9 webm).
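For context, the receiver boils down to something like this simplified sketch (file URLs are placeholders):

```js
// Simplified sketch of the receiver: each click adds one <video> element
// with an mp4 source and a webm fallback; clips appear side by side.
function addClip(baseUrl) {
  const video = document.createElement('video');
  for (const [src, type] of [
    [`${baseUrl}.mp4`, 'video/mp4'],
    [`${baseUrl}.webm`, 'video/webm'],
  ]) {
    const source = document.createElement('source');
    source.src = src;
    source.type = type;
    video.appendChild(source);
  }
  video.autoplay = true;
  document.body.appendChild(video);
}
```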
I recently made a Discord bot which takes the webm, extracts the opus stream and plays its sound in the voice channel where the bot is connected. This has one disadvantage: it can only play one sound at a time. It happens a lot that there are multiple videos/sounds playing at the same time, so this is a bit of a bummer.
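The bot's playback is essentially the following (a sketch in discord.js v12 style; the file name is a placeholder):

```js
// Current single-sound playback: 'webm/opus' tells discord.js to demux
// the opus stream from the webm without re-encoding.
const fs = require('fs');

async function playClip(voiceChannel, file) {
  const connection = await voiceChannel.join();
  connection.play(fs.createReadStream(file), { type: 'webm/opus' });
}
```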
So I thought I should create an audio stream on the server which hosts the videoboard and just connect the bot to that stream. But I have no clue how to do this. All I know is that it's very likely going to involve ffmpeg.
What would be the best option here? What I think I need is basically an infinite silence stream and the possibility to add an audio file onto that stream at any point, which will then play simultaneously with other audio files that were added before and have not ended playback yet. How is that possible? Somehow with m3u8 playlist files, or via the RTSP protocol?
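For illustration, the closest I can picture is something like this sketch (mixing a single clip over silence with ffmpeg's amix filter and pushing the result out as RTP; the file name and target URL are placeholders):

```js
// Sketch: mix an endless silent base with one clip and stream the mix as
// Opus over RTP. Note: inputs cannot be added to an already-running ffmpeg
// process, so a real mixer would need to restart ffmpeg or use a dedicated
// mixing tool instead.
const { spawn } = require('child_process');

const ffmpeg = spawn('ffmpeg', [
  '-f', 'lavfi', '-i', 'anullsrc=r=48000:cl=stereo', // endless silence
  '-i', 'clip.webm',                                 // placeholder clip
  '-filter_complex', 'amix=inputs=2:duration=first', // overlay clip on silence
  '-c:a', 'libopus',
  '-f', 'rtp', 'rtp://127.0.0.1:5004',               // placeholder target
]);

ffmpeg.stderr.pipe(process.stderr); // ffmpeg logs to stderr
```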
Thanks :)
I think this can be helpful for you: https://bitbucket.org/kaleniuk_ihor/neuro_vision/src/db_watch/
Also, this library was very useful for me: https://github.com/kyriesent/node-rtsp-stream. You can just install it with npm i node-rtsp-stream.
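Basic usage from that library's README looks roughly like this (the RTSP URL is a placeholder):

```js
const Stream = require('node-rtsp-stream');

// Relays an RTSP stream to a WebSocket on port 9999 (for jsmpeg clients).
const stream = new Stream({
  name: 'videoboard',
  streamUrl: 'rtsp://example.com/stream', // placeholder
  wsPort: 9999,
});
```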
I have been trying to find a way to play a live audio stream (mp3) using Google Actions but haven't found a way to do so.
I tried the Media Response as well, but as mentioned in the documentation it doesn't support live streams.
I followed this thread but it doesn't have any examples to help me.
Is it possible to play a live mp3 stream using Google Actions?
I've had relatively good results with the Media Player being able to handle mp3 "streams" (a minimal sketch follows these notes). There are a couple of problems doing this, however:
There is a time limit on the audio playback (4 hours last time I checked, but it may have changed).
There isn't any such thing as an mp3 "stream". The player treats it as a single mp3 file that it downloads in chunks using HTTP headers, unlike some of the streaming protocols that allow for varying bitrate based on network and other conditions.
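For reference, pointing a Media response at such a URL looks roughly like this with the actions-on-google Node.js library (assuming a Dialogflow app instance named app; the intent name and URL are placeholders):

```js
const { MediaObject } = require('actions-on-google');

app.intent('play stream', (conv) => {
  // A Media response must be accompanied by a simple response.
  conv.ask('Here is the stream.');
  conv.ask(new MediaObject({
    name: 'Live stream',
    url: 'https://example.com/live.mp3', // placeholder "stream" URL
  }));
});
```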
If this is an issue, one alternative might be to use the Interactive Canvas (which uses Chrome on the device) to present an HTML page that has an <audio> tag in it that you control (see the sketch after this list). This gives you a little more control (most streaming protocols are either supported or have JavaScript libraries that can do the work), but there are some downsides:
This will only work on Smart Displays and Android. Smart Speakers aren't supported.
Interactive Canvas is only allowed for certain types of Actions. Currently it must be a game, a story, or an educational Action.
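A minimal sketch of the <audio> approach inside the Canvas web app (the stream URL is a placeholder):

```js
// Create and control an <audio> element pointing at the live stream.
const player = document.createElement('audio');
player.src = 'https://example.com/live.mp3'; // placeholder URL
document.body.appendChild(player);
player.play().catch((err) => console.error('playback failed:', err));
```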
Intro:
I want to play a YouTube video clip and be able to control its state during the session (to sync between users). I want the YouTube video to be played on the currently chosen devices (it's a WebRTC app). E.g. I can choose a specific audio output for the app from the 3 that I have.
The problem that I have:
I am trying to get the YouTube video's audio in order to route it to the relevant audio output device. Currently, when I play the YouTube video, the audio is played through the current default audio output device and not through the one chosen in my app (I have the selected device id saved).
What I actually want to achieve:
I want to play the YouTube player and hear the video's audio track through the chosen audio output device (aka the chosen speaker), not through the default one.
What I am using:
Currently using React-Player with my addons.
I am using React and Node
Again:
The problem here is that the video will be played on the default audio output of each client (I cannot attach it to a specific one)
setSinkId is not reachable (see the sketch after the ideas list below)
Ideas:
Take the video element and get the audio track - not possible with an iframe
Using the YouTube API for it - I've never seen an option for this
I have some ideas about saving it as mp3, serving the audio and doing the sync footwork myself, but I'd prefer not to.
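For reference, this is what works with a plain media element that I own (Chromium-based browsers), but not with the YouTube iframe:

```js
// Route a media element's audio to a chosen output device.
// deviceId comes from navigator.mediaDevices.enumerateDevices().
async function routeAudio(mediaElement, deviceId) {
  await mediaElement.setSinkId(deviceId);
}
```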
Please let me know if you have an idea.
Thanks!
I found out the other day that Discord bots have the ability to play audio in stereo, which is not possible with a regular Discord account. Maybe it could be possible to stream Ableton Live's audio output to a Node.js server for a bot to play back in a Discord channel.
I found this plugin: https://listento.audiomovers.com/ which is a good starting point.
This page shows examples of audio playback code but not live streaming methods: https://discord.js.org/#/docs/main/stable/class/PlayInterface?scrollTo=play
The idea is to live stream audio without the delay that could be caused by video with software like OBS. And Discord would be a great platform for this, as people would be able to react and make music together.
I need help with the structure of all this. Do you think this is possible?
Have a look at https://rogueamoeba.com/audiohijack/
and its free alternative https://github.com/mattingalls/Soundflower/releases
If you run your Discord bot locally, then you can just set Ableton's output to be an input for the bot using one of the above.
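A rough sketch of how the bot could pick that input up (assuming macOS with Soundflower as the virtual device and discord.js v12; the avfoundation device index is a placeholder - list devices with ffmpeg -f avfoundation -list_devices true -i ""):

```js
// Capture the virtual audio device with ffmpeg and feed raw PCM to discord.js.
const { spawn } = require('child_process');

async function streamAbleton(voiceChannel) {
  const ffmpeg = spawn('ffmpeg', [
    '-f', 'avfoundation', '-i', ':1',          // audio-only capture, device 1
    '-f', 's16le', '-ar', '48000', '-ac', '2', // raw PCM as discord.js expects
    'pipe:1',
  ]);
  const connection = await voiceChannel.join();
  connection.play(ffmpeg.stdout, { type: 'converted' }); // 'converted' = raw PCM
}
```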
I agree. You can use the ASIO Link Pro tool. The developer gives it out for free, and I use it all the time, especially when chaining audio interfaces together to get more inputs. It works great; you can check out a video by Mr. Different TV on YouTube to understand it.
https://give.academy/downloads/2018/03/03/ODeusASIOLinkPro/
https://www.youtube.com/watch?v=emRZxa0pqbs
Unlimited inputs and outputs (>:o)
I'd like to be able to capture the audio from my computer's audio card and dispatch it with WebRTC. However, I am not sure whether it's possible to get access to the audio directly produced by my computer.
According to this repo https://github.com/niklasenbom/RecordingApp/blob/master/app.js there is some system audio stuff, but I'm not sure if it's what I'm looking for.
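The closest browser-only approach I have found looks like this (Chromium can expose tab/system audio through getDisplayMedia; support varies by platform):

```js
// Capture system/tab audio along with the screen and send the audio track
// over an existing RTCPeerConnection (pc).
async function shareSystemAudio(pc) {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true, // required by most browsers; audio-only capture is rejected
    audio: true,
  });
  for (const track of stream.getAudioTracks()) {
    pc.addTrack(track, stream);
  }
}
```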
Thanks,
You can do it by using NAudio. Actually, I did the same project myself and will put it on GitHub in a few weeks, then update this answer. You can configure the frequency etc. and use its DataAvailable event to dispatch the sound to registered clients.
I am working on a project for large-group broadcasting in WebRTC. Since it needs to work on iOS and Android devices, I am using Kurento and the iOSWebRTC Cordova plugin to build this. I am curious if anyone can help improve my plan, or if there is an easier way to achieve this.
We need to have a video/audio conference with 5 people per room; however, we need to be able to show that video to large audiences. Now my idea would be to use Kurento as a middleman and capture the streams into .webm files for live playback while the conference is going on.
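From the Kurento tutorials, I imagine the recording part would look something like this sketch with the kurento-client Node library (the WebSocket URI and file URI are placeholders):

```js
const kurentoClient = require('kurento-client');

// Record one participant's WebRTC stream to a webm file.
async function recordParticipant() {
  const client = await kurentoClient('ws://localhost:8888/kurento'); // placeholder
  const pipeline = await client.create('MediaPipeline');
  const webRtcEndpoint = await pipeline.create('WebRtcEndpoint');
  const recorder = await pipeline.create('RecorderEndpoint', {
    uri: 'file:///tmp/participant.webm', // placeholder
  });
  await webRtcEndpoint.connect(recorder);
  await recorder.record();
  return webRtcEndpoint; // SDP negotiation with the browser happens elsewhere
}
```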
Is there a better way to achieve this? And how would I play back the webm file as it is being recorded? It needs to update and continue playing as more video is sent; basically a live-stream copy of the camera.
I am unsure if I am taking the best route, but I figured that would reduce the bandwidth compared to my original idea. I was originally thinking of making it like this:
A 5-person conference for the broadcasters, with X viewers downloading those streams directly. However, I realized the upload bandwidth requirement would be crazy high; that is why I settled on this idea. Additionally, the viewers do not have to see it in real time like the broadcasters. The broadcasters need to be able to see and communicate with each other at the same time, and the viewers can be a few seconds behind.
TL;DR:
Trying to make a 5-person video conference with video/audio capturing, to then live stream it to the viewers' players. This would avoid PeerConnection bandwidth limitations. Would this work, or am I forgetting something?
You'll need to look into using an SFU or MCU. An MCU is very costly, but multiplexes video streams and sends down a single video stream to all peers, and can also record that stream. An SFU is a single point of receipt of all streams, and selectively forwards them to clients. It could record off individual streams and then you could do post-processing to make a single recording out of the multiple recorded streams. A mesh network of connections really doesn't work for this use case.