ChromeCast TTML Closed Captioning with Smooth Streaming and PlayReady - google-cast

Hi!
I would like to create an application that supports TTML-style closed captions.
My ISM/manifest file contains a TTML-based closed caption track; how can I use it?
I found this site,
https://developers.google.com/cast/docs/player
where they described the following:
Segmented TTML & WebVTT
Use Segmented TTML for Smooth Streaming and WebVTT - Web Video Text Tracks for HLS.
To enable:
protocol_.enableStream(streamIndex, true);
player_.enableCaptions(true);
But I can't find an example for my case. Do I have to enable this after creating my host on the receiver side?
Is there any sample app for this?
UPDATE #1
Here's my code:
window.onload = function() {
    var mediaElement = document.getElementById('video'); // 'video' is an HTML <video> tag
    var mediamanager = new cast.receiver.MediaManager(mediaElement);
    var url = "http://playready.directtaps.net/smoothstreaming/SSWSS720H264/SuperSpeedway_720.ism/Manifest"; // Just a sample URL
    var host = new cast.player.api.Host({ 'mediaElement': mediaElement, 'url': url });
    window.player = new cast.player.api.Player(host);
    protocol = cast.player.api.CreateSmoothStreamingProtocol(host);
    var initStart = 0;
    window.player.load(protocol, initStart);
    mediamanager.loadedmetadata = function(loadinfo) {
        // onMetadataLoaded fired, set the caption
    };
};
It doesn't work. So I decided to get the streams:
var streamCount = protocol.getStreamCount();
streamCount is 0. The manifest contains the closed caption track, so should I be using something other than getStreamCount()?
Thank you very much!

The correct approach is to listen for the metadataloaded event. Once that event has fired, you can get the stream count; don't query it before the event fires. Then enable the stream at the index you want (for the language you want, in case you have multiple ones) and enable captions. If you want to change the language, first disable captions, then select a different stream index, then enable captions again.
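A minimal sketch of that flow, building on the code from the question; the 'loadedmetadata' listener and the mimeType check on getStreamInfo() are assumptions, not a verified sample:

mediaElement.addEventListener('loadedmetadata', function () {
    var streamCount = protocol.getStreamCount(); // only meaningful once metadata has loaded
    for (var i = 0; i < streamCount; i++) {
        var streamInfo = protocol.getStreamInfo(i); // assumed to expose mimeType/language
        if (streamInfo.mimeType && streamInfo.mimeType.indexOf('text') === 0) {
            protocol.enableStream(i, true);     // enable the TTML caption stream
            window.player.enableCaptions(true); // turn captions on
            break;
        }
    }
});

To switch languages later, disable captions first, enable the other stream index, then call enableCaptions(true) again.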

No samples yet.
Yes, you should enable it after creating the Host and starting playback.

Related

How to make Safari play a Web Audio API buffer source node?

I have code that loads an audio file with fetch, decodes it to an AudioBuffer, and then creates a BufferSourceNode with the Web Audio API to receive that audio buffer and play it when I press a button on my web page.
In Chrome my code works fine. But in Safari... no sound.
Reading Web Audio API questions related to Safari, some people say the Web Audio API needs to receive input from the user in order to play sound.
In my case there is a button to be tapped in order to play the sound, so there is already user input. But it is not working.
I found an answer saying that the Web Audio API's decodeAudioData does not work with promises in Safari and that the old callback syntax must be used. I tried handling decodeAudioData the way that answer describes, but still no sound.
Can somebody please help me here? Thanks for any help!
<button ontouchstart="bPlay1()">Button to play sound</button>
window.AudioContext = window.AudioContext || window.webkitAudioContext;
const ctx = new AudioContext();
let au1;

window.fetch("./sons/BumboSub.ogg")
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => ctx.decodeAudioData(arrayBuffer,
        audioBuffer => {
            au1 = audioBuffer;
            return au1;
        },
        error => console.error(error)
    ));

function bPlay1() {
    ctx.resume();
    bot = "Botão 1";
    var playSound1b = ctx.createBufferSource();
    var vb1 = document.getElementById('sld1').value; // 'sld1' is a volume slider defined elsewhere
    playSound1b.buffer = au1;
    var gain1b = ctx.createGain();
    playSound1b.connect(gain1b);
    gain1b.connect(ctx.destination);
    gain1b.connect(dest); // 'dest' is not defined in this snippet
    gain1b.gain.value = vb1;
    console.log(au1);         // shows in console!
    console.log(playSound1b); // shows in console!
    playSound1b.start(ctx.currentTime);
}
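One avenue that may be worth ruling out, offered only as a hedged sketch: Safari keeps the AudioContext suspended until a user gesture, and ctx.resume() returns a promise, so starting the source only after that promise resolves keeps playback inside a fully running context. The variable names mirror the code above; this is untested:

function bPlay1() {
    // Sketch only: wait for resume() to finish before starting the source.
    ctx.resume().then(function () {
        var playSound1b = ctx.createBufferSource();
        playSound1b.buffer = au1; // decoded earlier via decodeAudioData
        var gain1b = ctx.createGain();
        playSound1b.connect(gain1b);
        gain1b.connect(ctx.destination);
        playSound1b.start();
    });
}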

Allowing for range requests in Service Stack

Recently, my company decided to play some video in the browser. We want to support Safari, Firefox and Chrome. To stream video, Safari requires that we implement HTTP range requests in ServiceStack. Our server supports range requests, as indicated by the 'Accept-Ranges: bytes' header being returned in the response.
Looking at previous questions, it seems we want to add a pre-request filter, but I don't understand the details of doing so. Adding this to our AppHost.cs Configure function does do something:
PreRequestFilters.Add((req, res) => {
    if (req.GetHeader(HttpHeaders.Range) != null) {
        var rangeHeader = req.GetHeader(HttpHeaders.Range);
        rangeHeader.ExtractHttpRanges(req.ContentLength, out var rangeStart, out var rangeEnd);
        if (rangeEnd > req.ContentLength - 1) {
            rangeEnd = req.ContentLength - 1;
        }
        res.AddHttpRangeResponseHeaders(rangeStart, rangeEnd, req.ContentLength);
    }
});
Setting a breakpoint, I can see that this code is hit. However, rangeEnd always equals -1 and ContentLength always equals 0. A rangeEnd of -1 is invalid per the spec, so something is wrong. Most importantly, adding this code breaks video playback in Chrome as well (although it does not break the loading of pictures). I'm not sure I'm on the right track.
If you would like the details of the request/response headers from the network tab, let me know.

Live Audio HLS stream fails to play

We are trying to play an HLS live stream that is audio-only.
It looks OK spec-wise, and we're able to play it in every browser and native player we have, but it fails to play on Chromecast.
Url: http://rcavliveaudio.akamaized.net/hls/live/2006635/P-2QMTL0_MTL/playlist.m3u8
Content-Type: vnd.apple.mpegURL
Steps to reproduce
Force this content URL and content type into the Chromecast player.
Expected
To hear audio playing, as on every other player we have tried.
Actual result
There is no playback. The master playlist is fetched, the chunk playlist is fetched and the first chunks are fetched, but there is no playback. It stops after a few chunks.
The player is stuck in the "processing segment" phase, and it stops.
Please change the content type to audio/mp4 and set the segment format to AAC:
mediaInfo.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.AAC;
Using Anjaneesh's comment, here's how I ended up solving this. On the receiver's JavaScript:
const instance = cast.framework.CastReceiverContext.getInstance();
const options = new cast.framework.CastReceiverOptions();
options.disableIdleTimeout = true;
options.supportedCommands = cast.framework.messages.Command.ALL_BASIC_MEDIA;
instance.start(options);
const playerManager = instance.getPlayerManager();
playerManager.setMessageInterceptor(cast.framework.messages.MessageType.LOAD, (req) => {
    req.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS_AAC;
    req.media.streamType = cast.framework.messages.StreamType.LIVE;
    return req;
});
The key is setting the message interceptor callback for the LOAD event/message. There, you can override hlsSegmentFormat from the client. In my case, I needed to indicate that my segments were in TS format.
I'm not entirely sure why this is necessary. It isn't necessary when there is a video track... only when video is missing.
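An alternative, sketched here under the assumption that the Web Sender's chrome.cast.media.MediaInfo also exposes hlsSegmentFormat, is to set the format on the sender's load request instead of intercepting on the receiver (untested):

// Hedged sketch: set the segment format from the sender side.
const castSession = cast.framework.CastContext.getInstance().getCurrentSession();
const mediaInfo = new chrome.cast.media.MediaInfo(
    'http://rcavliveaudio.akamaized.net/hls/live/2006635/P-2QMTL0_MTL/playlist.m3u8',
    'application/vnd.apple.mpegURL');
mediaInfo.streamType = chrome.cast.media.StreamType.LIVE;
mediaInfo.hlsSegmentFormat = chrome.cast.media.HlsSegmentFormat.TS_AAC; // assumption: property exists on the sender MediaInfo
const request = new chrome.cast.media.LoadRequest(mediaInfo);
castSession.loadMedia(request).then(
    () => console.log('Load succeeded'),
    (errorCode) => console.log('Load failed: ' + errorCode));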

Using getUserMedia to get audio from the microphone as it is being recorded?

I'm trying to write a Meteor.JS app that uses peer-to-peer "radio" communication. When a user presses a button, it broadcasts their microphone output to other people.
I have some code that gets permission to record audio, and it successfully gets a MediaStream object, but I can't figure out how to get the data from the MediaStream object as it is being recorded.
I see there is a method defined somewhere for getting all of the tracks of the recorded audio. I'm sure I could write some kind of loop that notifies me when audio has been added, but it seems like there should be a native, event-driven way to retrieve the audio from getUserMedia. Am I missing something? Thanks!
What you will want to do is access the stream through the Web Audio API (for the recording part), after assigning the stream grabbed through getUserMedia to a variable (I call it localStream). You can create as many MediaStreamSource nodes as you want from one stream, so you can record it while sending it to numerous people through different RTCPeerConnections.
// Use the prefixed constructor where needed (older WebKit browsers).
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var source = audioContext.createMediaStreamSource(localStream);

var AudioRecorder = function (source) {
    var recording = false;
    var worker = new Worker(WORKER_PATH); // WORKER_PATH points at a recorder worker script
    var config = {};
    var bufferLen = 4096;
    this.context = source.context;
    this.node = (this.context.createScriptProcessor ||
                 this.context.createJavaScriptNode).call(this.context,
                 bufferLen, 2, 2);
    this.node.onaudioprocess = function (e) {
        var sample = e.inputBuffer.getChannelData(0);
        // do what you want with the audio sample: push to a blob or send over a WebSocket
    };
    source.connect(this.node);
    this.node.connect(this.context.destination);
};
Here is a version I wrote/modified to send audio over websockets for recording on a server.
For sending the audio only when it is available, you could use WebSockets or a WebRTC peer connection.
You grab the stream in the getUserMedia success callback (you should have a global variable holding the stream for all your connections). When it becomes available, you can use a signalling server to forward the requesting SDPs to the audio supplier; the requesting SDPs can be set to receive-only on your connection.
PeerConnection example 1
PeerConnection example 2
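For the WebSocket path, here is a minimal, untested sketch of what the onaudioprocess handler above could push to a server; the endpoint URL and the raw Float32 framing are assumptions:

var recorder = new AudioRecorder(source);          // from the snippet above
var ws = new WebSocket('wss://example.com/audio'); // hypothetical endpoint
ws.binaryType = 'arraybuffer';

recorder.node.onaudioprocess = function (e) {
    var sample = e.inputBuffer.getChannelData(0);  // Float32Array of PCM samples
    if (ws.readyState === WebSocket.OPEN) {
        // Copy before sending, since the audio graph may reuse the underlying buffer.
        ws.send(new Float32Array(sample).buffer);
    }
};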
Try with code like this:
navigator.webkitGetUserMedia({audio: true, video: false},
    function (stream) { // Success callback
        var audioElement = document.createElement("audio");
        document.body.appendChild(audioElement);
        audioElement.src = URL.createObjectURL(stream);
        audioElement.play();
    },
    function () { // Error callback
        console.log("error");
    });
You can use the stream from the success callback to create an object URL and pass it into an HTML5 audio element.
Fiddle around in http://jsfiddle.net/veritas/2B9Pq/

Spotify API disable next / previous controls

Is there a way to disable next / previous track controls in the player with the new API, like Soundrop does?
I have come across similar questions; there are a couple of suggestions, such as playing a single track, playing a context with only one track in it, or using the setContextCanSkipPrev method on the player, but none of them work with the new API. I need a solution for API version 1.0.0.
You need to create a temporary playlist with a single track and use the enforceRules function:
require(['$api/models'], function(models) {
    var tempName = 'temp' + (new Date()).getTime();
    models.Playlist.createTemporary(tempName).done(function(playlist) {
        playlist.enforceRules('stream');
        playlist.load("tracks").done(function(loadedPlaylist) {
            var track = models.Track.fromURI('spotify:track:7B1Dl3tXqySkB8OPEwVvSu');
            loadedPlaylist.tracks.add(track);
            models.player.playContext(loadedPlaylist, 0);
        });
    });
});
At the moment, it seems the documentation for the API is missing the description of this function.
