AC-3 and E-AC-3 passthrough with HLS - google-cast

I'm developing a web receiver for Chromecast.
According to the Chromecast documentation, HLS with AC-3 passthrough is supported. Support is supposed to be checked with canDisplayType, and if that returns true, I assume it should be okay to use the codec.
I cannot get the Chromecast to accept HLS manifests with AC-3 in the codec list. I have tried this on 2nd- and 3rd-generation Chromecasts and both reject it. canDisplayType returns true or false depending on the settings made in the Google Home app. When the server actually sends AC-3, the player throws cast.player.api.ErrorCode.MANIFEST/411, aborts, and does not request anything else from the server.
I also cannot get any detailed reason for this; it would be helpful if the logs told me why the codec was presumably rejected.
Here's a typical manifest.m3u8 file that I have experimented with:
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5499178,AVERAGE-BANDWIDTH=5499178,CODECS="avc1.640028,mp4a.a5",RESOLUTION=720x576,FRAME-RATE=50
main.m3u8?<snip>&VideoCodec=h264&AudioCodec=ac3,aac,mp3&<snip>&TranscodingMaxAudioChannels=6&<snip>&h264-profile=high,main,baseline,constrainedbaseline&h264-level=41&<snip>
From my experimentation, mp4a.a5, mp4a.A5 and ac-3 all pass the canDisplayType test but fail at this stage.
For instance:
> context.canDisplayType("video/mp4", "avc1.640028,mp4a.a5", 1920, 1080, 50)
< true
I have also tried to trick the player by substituting the value with mp4a.40.2, or by pretending there is no audio at all. In those cases the results vary a bit, but generally the player requests some .ts files before giving up, or it plays without an audio track. In comparison, the same media transcoded to AAC-LC 2ch works fine.
Is there anything special one has to do to prepare the receiver to use passthrough audio?
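For reference, here is a minimal sketch of the kind of CAF receiver setup I'm describing, where the capability check decides which variant gets loaded; the manifest URLs and the 1280x720@50 parameters are placeholders, not my actual server or part of the Cast SDK:

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, (request) => {
      // Ask the device whether it can handle H.264 + AC-3 at the target resolution.
      const ac3Supported =
          context.canDisplayType('video/mp4', 'avc1.640028,mp4a.a5', 1280, 720, 50);
      // Placeholder URLs: the real server picks the codec list via query parameters.
      request.media.contentUrl = ac3Supported
          ? 'https://example.com/manifest.m3u8?AudioCodec=ac3'
          : 'https://example.com/manifest.m3u8?AudioCodec=aac';
      return request;
    });

context.start();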

Related

How to enable/disable audio on camera video stream using ONVIF

I'm really stumped here. I'm trying to use the ONVIF protocol to enable/disable audio on RTSP streams from an IP camera. As I understand it, to disable audio on a profile you first send 'removeAudioEncoderConfiguration' to remove the audio codec, then send 'removeAudioSourceConfiguration' on the same profile.
So I do this for ALL profiles on the camera (there happen to be 5). However, the RTSP stream still has audio in it.
This is happening on a Uniview camera. If I do the same sequence of messages on a Techwin device I have, then the audio is disabled.
Similarly, if I try to enable audio on a profile, I do the reverse operations. Again, the Techwin adds the audio, but the Uniview does not.
The responses I receive to my ONVIF messages are always 'OK'.
Any idea what I'm doing wrong and how to fix it?
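For illustration, here is a minimal sketch of that disable sequence as raw SOAP 1.2 calls from Node 18+ (global fetch); the service path and profile token are placeholders, and the WS-UsernameToken authentication that most cameras require is omitted:

// Hypothetical media service endpoint; the real path comes from GetCapabilities/GetServices.
const MEDIA_URL = 'http://192.168.1.10/onvif/media_service';

function soapBody(operation, profileToken) {
  return `<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Body>
    <${operation} xmlns="http://www.onvif.org/ver10/media/wsdl">
      <ProfileToken>${profileToken}</ProfileToken>
    </${operation}>
  </s:Body>
</s:Envelope>`;
}

async function disableAudio(profileToken) {
  // Remove the encoder configuration first, then the audio source, as described above.
  for (const op of ['RemoveAudioEncoderConfiguration', 'RemoveAudioSourceConfiguration']) {
    const res = await fetch(MEDIA_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/soap+xml; charset=utf-8' },
      body: soapBody(op, profileToken),
    });
    console.log(op, res.status);  // expect 200 with an empty ...Response element
  }
}

disableAudio('Profile_1');  // hypothetical profile token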

How do I use DASH instead of HLS in Cloudflare video streaming?

I'm using Cloudflare as the video streaming provider for a project. I'm trying to pre-fetch multiple videos on a mobile device, so using HLS (with its larger chunk size) is impacting performance; this is why I would like to request that the video be sent using DASH. Here, the Cloudflare team writes: "Cloudflare uses two standards for adaptive streaming: HLS and MPEG-DASH".
Every GET request to the video has yielded an HLS stream. Is there any way to request DASH given my Cloudflare video id?
Typically a video origin server and CDN will serve the stream that best matches a device's capabilities - usually this is triggered by the device requesting either an HLS or an MPEG-DASH stream, the two most popular streaming formats today.
Cloudflare Stream should provide you with URLs to both an HLS manifest and a DASH manifest automatically - they should look something like:
MPEG-DASH: https://videodelivery.net/VIDEOID/manifest/video.mpd
HLS: https://videodelivery.net/VIDEOID/manifest/video.m3u8
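For example, a minimal sketch that plays the DASH manifest directly, assuming dash.js is loaded on the page and VIDEOID is your Stream video id:

// Point a DASH-capable player at the .mpd manifest instead of the HLS one.
const url = 'https://videodelivery.net/VIDEOID/manifest/video.mpd';
const player = dashjs.MediaPlayer().create();
player.initialize(document.querySelector('video'), url, /* autoplay */ true);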

NodeJS piping with ffmpeg

I wanted to do an HTTP live stream of a screen cast using ffmpeg, Node.js and HTML5, and I wanted it to be as real-time as possible. However, I find that the video received by the client is behind by 1~2 seconds (on Chrome/Chromium). I am using VP8/WebM as my codec.
I have eliminated the following factors:
1) Network: I have tried serving and receiving the video locally by setting the video source to 127.0.0.1:PORT or localhost:PORT.
2) ffmpeg encoding speed: I have tried outputting the file locally, and the "delay" seems to be negligible.
3) Chrome internal buffer: the buffer accounted for only 0.07s~0.08s.
On the Node.js side, I have a child process that runs the ffmpeg command, and I pipe its output with ffmpeg.stdout.pipe(res); <-- ffmpeg is child_process.spawn(...)
So it seems that the ffmpeg.stdout.pipe(res) call in Node.js is what is delaying the video stream. Am I correct in assuming so? Is there any way to reduce the delay?
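For reference, a minimal sketch of the setup described above; the ffmpeg arguments (x11grab screen capture encoded to WebM/VP8) are illustrative rather than my exact command:

const http = require('http');
const { spawn } = require('child_process');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'video/webm' });
  const ffmpeg = spawn('ffmpeg', [
    '-f', 'x11grab', '-framerate', '25', '-i', ':0.0',  // screen capture source
    '-c:v', 'libvpx', '-deadline', 'realtime',          // VP8 with the low-latency preset
    '-f', 'webm', 'pipe:1'                              // write WebM to stdout
  ]);
  ffmpeg.stdout.pipe(res);                              // stream straight into the HTTP response
  req.on('close', () => ffmpeg.kill('SIGKILL'));        // stop encoding when the client disconnects
}).listen(8080);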
Go with WebRTC - there is no need to implement anything like codecs, pipes, etc. yourself (it is already built into Chrome, Opera and Firefox).
It uses:
MediaCapture API (access your cam and mic and turn them into a media stream; by default they use the VP8 codec, etc.)
RTCPeerConnection API (send and receive media streams peer-to-peer)
RTCDataChannel API (send and receive data peer-to-peer)
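A rough sketch of that capture-and-send path; signaling (exchanging the offer/answer and ICE candidates) is application-specific and omitted here:

const pc = new RTCPeerConnection();
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then((stream) => {
    // Hand the captured tracks to the peer connection; the browser encodes them (VP8 by default).
    stream.getTracks().forEach((track) => pc.addTrack(track, stream));
    return pc.createOffer();
  })
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => {
    // Send pc.localDescription to the remote peer over your own signaling channel.
  });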

Rendering Audio Stream from RTSP server

I have an RTSP server which re-streams the A/V stream from a camera to clients.
On the client side we are using Media Foundation (MF) to play the stream.
I can successfully play the video but I am not able to play the audio from the server. However, when I use VLC to play it, it plays both audio and video.
Currently I am implementing IMFMediaStream and have created my own custom media stream. I have also created a separate IMFStreamDescriptor for audio and added all the required attributes. When I run it, everything goes fine but my RequestSample method never gets called.
Please let me know if I am doing it wrong or if there is any other way to play the audio in MF.
Thanks,
Prateek
Media Foundation support for RTSP is limited to a small number of payload formats. VLC supports more (AFAIR through the Live555 library). Most likely, your payload is not supported in Media Foundation.

WP7 audio stream problem

I'm using MediaElement to play an audio MP3 stream and everything works OK, but now I have an MP3 stream whose URL does not end with .mp3
(http://server2.fmstreams.com:8011/spin103) and I'm getting
AG_E_NETWORK_ERROR
I found a suggestion to add ?ext=mp3 to the URL, but it didn't work for me. Any ideas?
If you are streaming live radio, the stream may be encoded by an IceCast or ShoutCast server. To read these streams, you will need to decode the stream in memory and pass it to the MediaElement once it has been decoded.
Have a look at Mp3MediaStreamSource: http://archive.msdn.microsoft.com/ManagedMediaHelpers
and Audio output from Silverlight.
I lost tons of time on this, and this is the best solution I have found so far.
You also have to make sure that while you are testing, the device is unplugged from the computer.
