Does Chromecast single video stream restriction apply to Audio objects? - google-cast

The Receiver application guidelines state that "Chromecast devices only support 1 concurrent media stream for playback", then go on to discuss it only in terms of video. I find that just creating a single Audio object like this...
new Audio("/sound/beep1.mp3");
...will prevent any subsequent video from playing. Is this expected behavior?

Chromecast only supports one active media element at a time, so if you have an Audio element/object, then you cannot have another Video element/object.
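A minimal sketch of one way to work within that limit, assuming the Audio object can simply be released before video playback starts (the sound path and player element id are illustrative, not from the original question):

// Release the Audio object so the video can claim the single media pipeline.
const beep = new Audio("/sound/beep1.mp3");
beep.play();

function playVideo(url) {
  beep.pause();
  beep.removeAttribute("src"); // drop the audio source...
  beep.load();                 // ...and reset the element

  const video = document.getElementById("player"); // illustrative element id
  video.src = url;
  video.play();
}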

Related

When video or audio is played from a URI, is it streamed or downloaded fully and then played?

I have a content creation site I am building and I'm confused about audio and video.
If I have a content creator's audio or video stored in S3 and I want to display their file, will the HTML video or audio player stream the media, or will it download it fully and then play it?
I ask because the video or audio could be significantly long, like 2 hours for example. I need to know how to handle that use case.
Lastly, what file type is most suitable for viewing on web pages? It seems like MPEG-4 is the best bet. Is that true?
Most video player clients and browsers will attempt to stream the video if they can.
For an mp4 video file hosted on a server, so long as the header (the moov atom) is at the start and the server accepts range requests, this means the player downloads the video in chunks and starts playing as soon as it has enough to decode the first frames.
For more professional streaming services, they will generally use an adaptive bit rate streaming protocol like DASH or HLS (see this answer: https://stackoverflow.com/a/42365034/334402) and again the video will be streamed in chunks, or segments, and will start playing while it is streaming.
To answer your last question, be aware that the raw video is encoded (e.g. H.264, VP9 etc.) and the video, audio, subtitle etc. tracks are stored in a container (e.g. mp4, WebM etc.).
The most common combination is probably H.264 encoded video in an mp4 container at this time.
The particular profile for h.264 can matter also depending on the device - baseline is probably the most supported profile at this time. You can find examples of media support for different devices online, e.g. for Android: https://developer.android.com/guide/topics/media/media-formats
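As a quick way to see whether progressive playback will work for a given file, you can ask the server for a byte range and inspect the response. A minimal sketch (the URL is a placeholder, and in a browser the server would also need permissive CORS headers for these response headers to be readable):

const url = "https://example.com/video.mp4"; // placeholder

fetch(url, { headers: { Range: "bytes=0-1023" } })
  .then((res) => {
    // A 206 Partial Content status (or Accept-Ranges: bytes) means the player
    // can fetch chunks and seek without downloading the whole file first.
    console.log("status:", res.status);
    console.log("accept-ranges:", res.headers.get("Accept-Ranges"));
    console.log("content-range:", res.headers.get("Content-Range"));
  })
  .catch(console.error);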
@Mick's answer is spot on. I'll just add that mp4 (with H.264 encoding) will work in just about every browser out there.
The issue with mp4 files (especially with a 2 hour long movie) isn't so much the seeking and streaming. If your creator uploads a 4K video, that's what you'll deliver to everyone (even mobile phones). HLS streaming, on the other hand, has adaptive bitrates, where the video adapts to both the screen and the available network speed. You'll get better playback results with less buffering (and, if you're using AWS, a LOT less data egress) with adaptive streaming.
(there are a bunch of APIs and services that can help you do this - including api.video (where I work), Mux and others).
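For the HLS route, browsers other than Safari need a JavaScript player on top of the video element. A minimal sketch using the open-source hls.js library (assumed to be loaded on the page; the manifest URL is a placeholder):

const video = document.querySelector("video");
const manifest = "https://example.com/stream/master.m3u8"; // placeholder

if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari (and iOS) can play HLS natively.
  video.src = manifest;
} else if (window.Hls && Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(manifest); // fetch the master playlist
  hls.attachMedia(video);   // feed segments to the video element via MSE
}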

How to play live audio stream using Google Actions Dialogflow

I have been trying to find a way to play a live stream of audio (mp3) using Google Actions but haven't found a way to do so.
I tried the Media Response as well, but as mentioned in the documentation it doesn't support live streams.
I followed this thread but it doesn't have any examples to help me.
Is it possible to play live mp3 stream using Google Actions?
I've had relatively good results with the Media Player being able to handle mp3 "streams". There are a couple of problems doing this, however:
There is a time limit on the audio playback (4 hours last time I checked, but it may have changed).
There isn't any such thing as an mp3 "stream". The player treats it as a single mp3 file that it downloads in chunks using HTTP headers, unlike some of the streaming protocols that allow for varying bitrate based on network and other conditions.
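Despite those limits, pointing a Media response at a stream URL does work in many cases. A minimal sketch with the actions-on-google Node library (the intent name, URL, title and icon are placeholders, not from the original question):

const { dialogflow, MediaObject, Image, Suggestions } = require("actions-on-google");

const app = dialogflow();

app.intent("play.stream", (conv) => {
  conv.ask("Starting the stream.");
  conv.ask(new MediaObject({
    name: "Live stream",
    url: "https://example.com/live/stream.mp3", // placeholder stream URL
    description: "Illustrative live audio stream",
    icon: new Image({ url: "https://example.com/icon.png", alt: "Stream icon" }),
  }));
  // Devices with screens require suggestion chips alongside a media response.
  conv.ask(new Suggestions(["Stop"]));
});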
If those limits are a problem, one alternative might be to use the Interactive Canvas (which uses Chrome on the device) to present an HTML page with an <audio> tag in it that you control (see the sketch after this list). This gives you a little more control (most streaming protocols are either supported natively or have JavaScript libraries that can do the work), but there are some downsides:
This will only work on Smart Displays and Android. Smart Speakers aren't supported.
Interactive Canvas is only allowed for certain types of Actions. Currently it must be a game, a story, or an educational Action.
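A minimal sketch of the web-page side of such a Canvas Action; the stream URL is a placeholder, and the wiring to the Interactive Canvas onUpdate callback is only indicated in outline:

// Create and control an audio element from the Canvas web page.
const audio = document.createElement("audio");
audio.src = "https://example.com/live/stream.mp3"; // placeholder stream URL
document.body.appendChild(audio);

// Hook this into the Interactive Canvas onUpdate callback so the webhook
// can start/stop playback; the state shape here is illustrative.
function handleState(state) {
  if (state.command === "PLAY") audio.play();
  if (state.command === "PAUSE") audio.pause();
}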

Take the audio of the youtube video element

Intro:
I want to play a YouTube video clip and be able to control its state during the session (to sync between users). I want the YouTube video to be played on the currently chosen devices (it's a WebRTC app). E.g., I can choose a specific audio output for the app from the 3 that I have.
The problem that I have:
I am trying to get the YouTube video's audio in order to route it to the relevant audio output device. Currently, when I play the YouTube video, the audio is played through the default audio output device and not through the one chosen in my app (I have the selected device id saved).
What I actually want to achieve:
I want to play the YouTube player and hear the video's audio track through the chosen audio output device (i.e. the chosen speaker), not through the default one.
What I am using
Currently using React-Player with my addons.
I am using React and Node
Again:
The problem here is that the video will be played on the default audio output of each client (I cannot attach it to a specific one).
setSinkId is not reachable.
Ideas:
Take the video element and get the audio track - not possible with an iframe.
Use the YouTube API for it - I've never seen such an option.
I have some ideas about saving it as an mp3, serving the audio myself and doing the sync footwork, but I'd prefer not to.
Please let me know if you have an idea.
Thanks!
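For reference, this is roughly what routing a media element to a chosen output device looks like with HTMLMediaElement.setSinkId(); it only works on an element you own in your own page (support varies by browser), which is exactly what the YouTube iframe doesn't give you. The device-label matching below is illustrative:

async function routeToDevice(mediaElement, labelHint) {
  // Device labels are only populated after the user has granted media permissions.
  const devices = await navigator.mediaDevices.enumerateDevices();
  const output = devices.find(
    (d) => d.kind === "audiooutput" && d.label.includes(labelHint)
  );
  if (output) {
    await mediaElement.setSinkId(output.deviceId);
  }
}

// Usage with an element you control (not the YouTube iframe):
// routeToDevice(document.querySelector("audio"), "USB Speaker");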

Audio Streaming Using J2ME

I've got audio online in the form of MP3 files; how do I stream the audio from my J2ME app? A website gives the app a list of audio to play, the user selects the audio and it must then stream from the website.
Sample code would be nice. Thanks.
There is no reliable way to ensure that a MIDlet will stream audio data because you don't control how the phone manufacturer implemented JSR-135 (the specification that gives you the API to play audio in a MIDlet).
Technically, creating a Java media player using javax.microedition.media.Manager.createPlayer(String aUrl) should make the JSR-135 implementation stream the audio data located at the url.
Unfortunately, only streaming of very simple audio content (wav more often than mp3), if any, is usually supported over a network connection, and more often than not a call to createPlayer(String aUrl) will throw an exception if the URL doesn't begin with "file://".
There are probably devices where the manufacturer managed to plug a more complete audio/networking module into the JSR-135 implementation but finding them will require a lot of testing for you.
J2ME won't let you do this over HTTP. It will download the entire audio before it starts playback. What you need is to host it on an RTP server instead; only then will J2ME stream the audio.
If that's no good, then you might be stuck looking for devices that have their own proprietary libraries for this kind of thing.
There's a better way to do this.
Create your own class that extends InputStream, say MyHTTPInputStream, and implement all of its methods. Run a thread to retrieve the data over HTTP and store it in a buffer; when the Player class calls InputStream.read(), serve the data from the buffer.
Before using this class with Player, test MyHTTPInputStream with a dummy WAV file stored in phone memory or on the memory card, so you can see which InputStream methods are called and the sequence of calls the Player class makes.

Playing multiple audio streams simultaneously from one audio file

I have written an application that receives media files from a central server and plays those files according to a playlist. All works well.
A client has contacted us and wants to use our application to play some audio files as presentations in a kiosk-style application. So far, so good, our application can handle this no problems.
He has requested as a potential feature that we would have a number of headphone sockets at the front of the kiosk. Each headphone socket would play the same audio presentation in a different language.
I have come up with the idea of encoding a single audio file with the presentation in multiple languages, and each language in a different channel. We would then require a sound card that could decode each channel and output it on a different headphone socket.
Thing is, while I think the theory is sound, I have absolutely no idea whether this is feasible and what would be required to pull it off.
Any ideas?!
As a side-note: the application uses Media Player as the underlying component to handle the playback of audio and video. I'd appreciate any help as to the software we could use to generate the multi-channel audio stream and the hardware (USB sound card would be fine) that we could use to decode the stream.
Thanks!
You need to use multiple files, not channels; it's going to be way easier that way.
Instead of using Media Player, use DirectShow (on .NET you have DirectShow.NET). In DirectShow you have the notion of multiple files on the same graph.
You will be able to control which audio device plays which file, and your Play, Pause and Stop commands will be performed on all files without you needing to worry about syncing.
There are many samples on how to build a media-player-like application with DirectShow; extending them to use multiple files should be really easy.
For HW take a look at this (USB with 8 output channels)
I think with Shay's hardware you've got a complete solution:
Encode a 7.1 file with a different mono voice track on each channel.
Use the 8 channel output device in 7.1 mode, with a different headset in each port, and you've got it. Or, if you only have 6 languages, a 5.1 file would work. Many PC's have 5.1 outputs built in, you'd only need 3 splitters to break out the left and right channels from each jack.
You can do the encoding with Windows Media Encoder, or other pro audio tool.
