Intro:
I want to play a YouTube video clip and be able to define its state during the session (to sync between users). I want the YouTube video to play on the currently chosen devices (it's a WebRTC app), e.g. I can choose a specific audio output for the app from the three that I have.
The problem that I have:
I am trying to get the YouTube video's audio in order to route it to the relevant audio output device. Currently, when I play the YouTube video, the audio comes out of the default audio output device rather than the one chosen in my app (I have the selected device id saved).
What I actually want to achieve:
I want to play the YouTube player and hear the video's audio track through the chosen audio output device (aka the chosen speaker), not the default one.
What I am using
Currently using React-Player with my own additions.
I am using React and Node.
Again:
The problem here is that the video plays through each client's default audio output (I cannot attach it to a specific one)
setSinkId is not reachable
Ideas:
Take the video element and get its audio track - not possible with an iframe
Use the YouTube API for it - I have never seen such an option
I have some ideas about saving the audio as mp3, serving it separately, and doing the sync footwork myself, but I would prefer not to.
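For reference, here is a minimal sketch of how setSinkId normally works on a media element you control (assuming a self-hosted <video> and a device id saved from enumerateDevices()). It won't work through the cross-origin YouTube iframe, which is exactly the problem:

    // Minimal sketch: route a media element you control to a chosen output
    // device. Works only for elements whose media you can access directly,
    // not for the cross-origin YouTube iframe.
    const video = document.querySelector('video');  // a self-hosted <video>
    const deviceId = savedSinkId;                   // assumption: id persisted from enumerateDevices()

    if (typeof video.setSinkId === 'function') {
      video.setSinkId(deviceId)
        .then(() => console.log('Audio routed to device', deviceId))
        .catch((err) => console.error('setSinkId failed:', err));
    } else {
      console.warn('setSinkId is not supported in this browser');
    }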
Please let me know if you have an idea.
Thanks!
I have a content creation site I am building and I'm confused about audio and video.
If a content creator's audio or video is stored in S3 and I want to display their file, will the HTML video or audio player stream the media, or will it download it fully and then play it?
I ask because the video or audio may be significantly long - 2 hours, for example - and I need to know how to handle that use case.
Lastly, what file type is most widely accepted for viewing on webpages? It seems like MPEG-4 is the best bet. Is that true?
Most video player clients and browsers will attempt to stream the video if they can.
For an mp4 video file hosted on a server, as long as the header is at the start of the file and the server accepts range requests, the player will download the video in chunks and start playing as soon as it has enough to decode the first frames.
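As a rough illustration, a range-request handler on the server side might look like the sketch below (Node/Express, with an illustrative file path - in practice nginx, S3 etc. handle this for you):

    // Minimal sketch of an HTTP range-request handler, so a <video>
    // element can stream an mp4 in chunks rather than downloading the
    // whole file first. The file path is illustrative.
    const express = require('express');
    const fs = require('fs');
    const app = express();

    app.get('/video', (req, res) => {
      const path = 'movie.mp4'; // assumption: an mp4 with its header at the front
      const { size } = fs.statSync(path);
      const range = req.headers.range;

      if (!range) {
        // No Range header: send the whole file
        res.writeHead(200, { 'Content-Length': size, 'Content-Type': 'video/mp4' });
        return fs.createReadStream(path).pipe(res);
      }

      // Parse "bytes=start-end" and reply with 206 Partial Content
      const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
      const start = parseInt(startStr, 10);
      const end = endStr ? parseInt(endStr, 10) : size - 1;

      res.writeHead(206, {
        'Content-Range': `bytes ${start}-${end}/${size}`,
        'Accept-Ranges': 'bytes',
        'Content-Length': end - start + 1,
        'Content-Type': 'video/mp4',
      });
      fs.createReadStream(path, { start, end }).pipe(res);
    });

    app.listen(3000);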
For more professional streaming services, they will generally use an adaptive bit rate streaming protocol like DASH or HLS (see this answer: https://stackoverflow.com/a/42365034/334402) and again the video will be streamed in chunks, or segments, and will start playing while it is streaming.
To answer your last question, be aware that the raw video is encoded (e.g. h.264, VP9 etc) and the video, audio, subtitle etc tracks are stored in a container (e.g. mp4, WebM etc).
The most common combination at this time is probably h.264 encoding in an mp4 container.
The particular h.264 profile can also matter depending on the device - baseline is probably the most widely supported profile at this time. You can find examples of media support for different devices online, e.g. for Android: https://developer.android.com/guide/topics/media/media-formats
@Mick's answer is spot on. I'll just add that mp4 (with h264 encoding) will work in just about every browser out there.
The issue with mp4 files (especially with a 2-hour-long movie) isn't so much the seeking and streaming. If your creator uploads a 4K video, that's what you'll deliver to everyone (even mobile phones). HLS streaming, on the other hand, has adaptive bitrates: the video adapts to both the screen and the available network speed. You'll get better playback results with less buffering (and, if you're using AWS, a LOT LESS data egress) with adaptive streaming.
(There are a bunch of APIs and services that can help you do this - including api.video (where I work), Mux and others.)
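On the client side, playing an HLS stream is only a few lines with a player library. Here's a minimal sketch using hls.js (one common choice - Safari plays HLS natively; the playlist URL is illustrative):

    // Minimal sketch: playing an HLS (adaptive bitrate) stream in the
    // browser with the hls.js library. The playlist URL is hypothetical.
    import Hls from 'hls.js';

    const video = document.querySelector('video');
    const src = 'https://example.com/stream/master.m3u8'; // hypothetical master playlist

    if (Hls.isSupported()) {
      const hls = new Hls();
      hls.loadSource(src);
      hls.attachMedia(video); // hls.js feeds segments via Media Source Extensions
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
      video.src = src; // native HLS playback (Safari)
    }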
This is a question on execution.
On video chat creation, each user gets a div created for them, which is just a black picture and their name.
When they click the start video button in my UI, a localVideoTrack is created and published to all subscribers. The code then appends that video track to the UI.
But what about when I want an audio-only track, without any video?
Or what if I want audio and video, but then want to mute the audio?
My thought is this.
You create a new local track and enable audio, video, or both. When you want to change the state of a track - like turning off audio - you just create a local track again without audio, publish it, remove the current video track from the UI, and replace it with the new one.
Or I could just use separate video and audio tracks, but I don't know if that is the right move.
Input would be appreciated!
Twilio developer evangelist here.
Video tracks and audio tracks are different. Video is only concerned with the camera and visual of a participant. Audio is only concerned with the microphone and the sound of a participant. So when you create a new video track, it should only ask for access to the camera and only publish a single video track. When you create a new audio track, it should only ask for access to a microphone and only publish a single audio track. When you create local tracks, or connect to a room and try to publish both audio and video, then permission is asked for both camera and microphone access, and two tracks, one for video and one for audio, are published.
At any stage after your participant connects to a Twilio Video room you can then publish new video/audio tracks to add new tracks to the participant. You can also unpublish those tracks, to completely remove them from the participant.
Once a track is published, you can then disable/enable the track, which is muting the audio/video without unpublishing it from the room. This is a quicker process than publishing/unpublishing.
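To make that concrete, here is a minimal sketch with the twilio-video JS SDK, assuming `room` is an already-connected Room object:

    // Minimal sketch: publish/unpublish vs. disable/enable with the
    // twilio-video JS SDK. `room` is assumed to be a connected Room.
    const { createLocalAudioTrack } = require('twilio-video');

    async function addAudio(room) {
      // Publish a new audio-only track: prompts only for microphone access
      const audioTrack = await createLocalAudioTrack();
      await room.localParticipant.publishTrack(audioTrack);

      // Mute/unmute without unpublishing - quicker than republishing
      audioTrack.disable(); // muted
      audioTrack.enable();  // unmuted

      // Or remove the track from the participant entirely
      room.localParticipant.unpublishTrack(audioTrack);
    }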
My goal is to be able to write sheet music in Musescore and then have the audio output of the playback routed to Ableton Live.
I've tried using loopMIDI and LoopBe1 as virtual MIDI cables.
I have the JACK audio driver set in Ableton's audio preferences under ASIO drivers. As seen in the photo, Ableton seems to recognize the virtual MIDI cables as an input. I have MuseScore's JACK audio settings enabled, and I have a MIDI instrument set up in Ableton. However, when I play back audio in MuseScore, Ableton doesn't seem to recognize any input.
I was trying to follow along with this tutorial, but it seemed to omit certain details. For example, as seen in my image, I was only able to route general sound/MIDI devices together, not a specific [left1,right1] pair to another [in1,in2].
I have coded a videoboard - like a soundboard, but with video. You go to one URL that's just a black screen (the receiver) and another one that has a list of different videos (the sender). When you click one of these videos, it plays on the black screen. If you play 2 different videos at the same time, both are shown next to each other on the receiver. That has been working fine for several months now. It just creates multiple HTML video elements with multiple source tags (H.265 mp4 and VP9 webm).
I recently made a Discord bot which takes the webm, extracts the Opus stream, and plays its sound in the voice channel the bot is connected to. This has one disadvantage: it can only play one sound at a time. It happens a lot that multiple videos/sounds play at the same time, so this is a bit of a bummer.
So I thought I should create an audio stream on the server which hosts the videoboard and just connect the bot to that stream. But I have no clue how to do this. All I know is that it's very likely going to involve ffmpeg.
What would be the best option here? What I think I need is basically an infinite silence stream plus the possibility of adding an audio file to that stream at any point, which would then play simultaneously with other audio files that were added earlier and have not finished playback yet. How is that possible? Somehow with m3u8 playlist files, or via the RTSP protocol?
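To illustrate what I mean, the ffmpeg building blocks look roughly like this sketch (anullsrc generates infinite silence, amix overlays inputs) - though as far as I can tell a static filter graph like this cannot gain new inputs once running, which is exactly my problem. Filenames and the Icecast URL are hypothetical:

    // Rough sketch of the ffmpeg building blocks, spawned from Node.
    // anullsrc = infinite silence bed, amix = overlay inputs. A static
    // graph like this cannot accept inputs at runtime; it only
    // illustrates the idea.
    const { spawn } = require('child_process');

    const ffmpeg = spawn('ffmpeg', [
      '-f', 'lavfi', '-i', 'anullsrc=r=48000:cl=stereo', // infinite silence
      '-i', 'clip.webm',                                 // hypothetical sound to overlay
      '-filter_complex', 'amix=inputs=2:duration=first', // mix silence + clip
      '-c:a', 'libopus', '-f', 'ogg',
      'icecast://source:hackme@localhost:8000/board.ogg', // hypothetical stream sink
    ]);

    ffmpeg.stderr.pipe(process.stderr); // ffmpeg writes its log to stderr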
Thanks :)
I think this can be helpful for you: https://bitbucket.org/kaleniuk_ihor/neuro_vision/src/db_watch/
This library was also very useful for me: https://github.com/kyriesent/node-rtsp-stream - you can install it with npm i node-rtsp-stream
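A minimal usage sketch for node-rtsp-stream, assuming an RTSP source URL (it relays the stream to a websocket for playback with jsmpeg in the browser; ffmpeg must be installed):

    // Minimal sketch of node-rtsp-stream. The RTSP URL is hypothetical.
    const Stream = require('node-rtsp-stream');

    const stream = new Stream({
      name: 'videoboard',                        // arbitrary stream name
      streamUrl: 'rtsp://example.com/my-stream', // hypothetical RTSP source
      wsPort: 9999,                              // websocket port for clients
      ffmpegOptions: {                           // passed through to ffmpeg
        '-stats': '',
        '-r': 30,
      },
    });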
I want to create an app that can play a live camera feed on Nokia devices.
I created a sample app as described here:
http://www.developer.nokia.com/Community/Wiki/How_to_play_video_streaming_in_Java_ME
Using this I am able to play YouTube or file-based RTSP streams, but not direct camera feeds.
Further details
The IP camera sends live camera captures as a feed in RTSP format
MPEG-4 format
It is possible to play this feed with RealPlayer and VLC on desktop systems
I am not able to play this feed using RealPlayer on mobile
How do I create an app that can play the live feed?
It would also be fine if I could find an existing media player capable of playing live feeds.