We are trying to play an HLS live stream that is audio-only.
It looks OK spec-wise and we're able to play it in every browser and native player we have, but it fails to play on Chromecast.
Url: http://rcavliveaudio.akamaized.net/hls/live/2006635/P-2QMTL0_MTL/playlist.m3u8
Content-Type: vnd.apple.mpegURL
Steps to reproduce
Force this content url and content type into the Chromecast player.
Expected
To hear audio playing, as on every other player we try.
Actual result
There is no playback. The master playlist is fetched, the chunk playlist is fetched, and the first chunks are fetched, but playback never starts; it stops after a few chunks.
The player is stuck in the "processing segment" phase, and it stops.
Please change the content type to audio/mp4 and set the segment format to AAC: mediaInfo.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.AAC;
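For reference, a minimal sender-side sketch of that suggestion, assuming a Web Sender page with an active cast session (the chrome.cast.media enum and property names mirror the receiver-side enum named above; treat them as assumptions to verify against the sender SDK docs):
// Sketch only: an initialized Web Sender with an active session is assumed.
const castSession = cast.framework.CastContext.getInstance().getCurrentSession();
const mediaInfo = new chrome.cast.media.MediaInfo(
  'http://rcavliveaudio.akamaized.net/hls/live/2006635/P-2QMTL0_MTL/playlist.m3u8',
  'audio/mp4' // content type changed from vnd.apple.mpegURL, as suggested
);
mediaInfo.hlsSegmentFormat = chrome.cast.media.HlsSegmentFormat.AAC;
mediaInfo.streamType = chrome.cast.media.StreamType.LIVE;
castSession.loadMedia(new chrome.cast.media.LoadRequest(mediaInfo))
  .then(function () { console.log('load succeeded'); })
  .catch(function (err) { console.log('load failed', err); });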
Using Anjaneesh's comment, here's how I ended up solving this. On the receiver's JavaScript:
const instance = cast.framework.CastReceiverContext.getInstance();
const options = new cast.framework.CastReceiverOptions();
options.disableIdleTimeout = true;
options.supportedCommands = cast.framework.messages.Command.ALL_BASIC_MEDIA;
instance.start(options);
const playerManager = instance.getPlayerManager();
playerManager.setMessageInterceptor(cast.framework.messages.MessageType.LOAD, (req) => {
  req.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS_AAC;
  req.media.streamType = cast.framework.messages.StreamType.LIVE;
  return req;
});
The key is setting the message interceptor callback for the LOAD event/message. There, you can override hlsSegmentFormat from the client. In my case, I needed to indicate that my segments were in TS format.
I'm not entirely sure why this is necessary. It isn't necessary when there is a video track... only when video is missing.
Related
I have code that loads an audio file with fetch, decodes it to an AudioBuffer, and then creates a BufferSourceNode in the Web Audio API to receive that audio buffer and play it when I press a button on my web page.
In Chrome my code works fine, but in Safari there is no sound.
Reading Web Audio API questions related to Safari, some people say the Web Audio API needs to receive input from the user in order to play sound.
In my case I have a button that is tapped to play the sound, so there is already a user input, but it is still not working.
I found an answer saying that decodeAudioData does not work with promises in Safari and that the older callback syntax must be used. I have tried handling decodeAudioData the way that answer describes, but still no sound.
Can somebody please help me here? Thanks for any help!
<button ontouchstart="bPlay1()">Button to play sound</button>
window.AudioContext = window.AudioContext || window.webkitAudioContext;
const ctx = new AudioContext();
let au1;
// fetch the sample and decode it with the callback form of decodeAudioData
window.fetch("./sons/BumboSub.ogg")
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => ctx.decodeAudioData(arrayBuffer,
    audioBuffer => {
      au1 = audioBuffer;
    },
    error => console.error(error)
  ));
function bPlay1() {
  ctx.resume(); // resume the context from the user gesture
  var bot = "Botão 1"; // label ("Button 1"), not used below
  var playSound1b = ctx.createBufferSource();
  var vb1 = document.getElementById('sld1').value; // volume slider (not shown in the HTML above)
  playSound1b.buffer = au1;
  var gain1b = ctx.createGain();
  playSound1b.connect(gain1b);
  gain1b.connect(ctx.destination);
  gain1b.connect(dest); // 'dest' is assumed to be defined elsewhere (e.g. a MediaStreamAudioDestinationNode)
  gain1b.gain.value = vb1;
  console.log(au1);        // shows in console!
  console.log(playSound1b); // shows in console!
  playSound1b.start(ctx.currentTime);
}
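For what it's worth, the callback form of decodeAudioData mentioned in the question can be wrapped in a promise so the rest of the fetch chain stays the same. A minimal sketch, assuming the same file path and a button handler that resumes the context as above (the fallback-format note is worth checking, since Safari historically cannot decode Ogg Vorbis):
window.AudioContext = window.AudioContext || window.webkitAudioContext;
const ctx = new AudioContext();
// Wrap the callback-style decodeAudioData (needed by older Safari) in a Promise.
function decodeAudioDataCompat(context, arrayBuffer) {
  return new Promise((resolve, reject) => {
    context.decodeAudioData(arrayBuffer, resolve, reject);
  });
}
let au1;
fetch("./sons/BumboSub.ogg") // an .m4a/.mp3/.wav fallback may be needed for Safari
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => decodeAudioDataCompat(ctx, arrayBuffer))
  .then(audioBuffer => { au1 = audioBuffer; })
  .catch(err => console.error(err));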
I'm working on developing an application that will capture audio from the browser in 5 second "chunks" (these are full audio files, not partial files), send these 5 second chunks to the server, convert them from WebM to MP3 on the server, and then broadcast the MP3 to clients connected via a websocket or a static URL.
I've successfully managed to do parts 1 and 2; however, I'm not quite sure of the best approach for transmitting the created MP3 audio to the user. My thinking was either to generate a single URL for clients to listen to, e.g. http://localhost/livestream.mp3 (a live-stream URL that would automatically update itself with the latest audio data), or to emit the audio files to the clients over a websocket and attempt to play the sequenced files seamlessly, without any noticeable gaps as they switch over.
Here's a snippet of my [TypeScript] code where I create the MP3 file. I've pointed out the area where I would set up the write stream, and from there I would expect to pipe the audio to users when they make an HTTP request.
// assumes the surrounding class imports: fs, child_process (as childprocess), and the logger module
private createAudioFile(audioObj: StreamObject, socket: SocketIO.Socket): void {
  const directory: string = `${__dirname}/streams/live`;
  // write the raw webm chunk to disk first
  fs.writeFile(`${directory}/audio_${audioObj.time}.webm`, audioObj.stream, (err: NodeJS.ErrnoException) => {
    if (err) logger.default.info(err.toString());
    try {
      // convert the webm chunk to mp3 with ffmpeg
      const process: childprocess.ChildProcess = childprocess.spawn('ffmpeg',
        ['-i', `${directory}/audio_${audioObj.time}.webm`, `${directory}/audio_${audioObj.time}.mp3`]);
      process.on('exit', () => {
        // Ideally, this is where I would be broadcasting the audio from
        // the static URL by adding the new stream data to it, or by
        // emitting it out to all clients connected to my websocket
        // const wso = fs.createWriteStream(`${directory}/live.mp3`);
        // const rso = fs.createReadStream(`${directory}/audio_${audioObj.time}.mp3`);
        // rso.pipe(wso);
        if (audioObj.last === true) {
          this.archiveAudio(directory, audioObj.streamName);
        }
      });
    } catch (e) {
      logger.default.error('CRITICAL ERROR: Exception occurred when converting file to mp3:');
      logger.default.error(e);
    }
  });
}
I've seen a number of questions that ask about a similar concept, but not quite the final goal that I'm looking for. Is there a way to make this work?
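One common way to sketch the single-URL idea is to keep a long-lived PassThrough stream as the "live" feed, pipe each newly converted MP3 chunk into it from the ffmpeg 'exit' handler, and pipe the feed into every HTTP response that requests it. The sketch below is only an illustration (the port, route, and broadcastChunk helper are assumptions, and a real player may still need proper MP3 framing/ICY handling to play gaplessly):
var http = require('http');
var fs = require('fs');
var PassThrough = require('stream').PassThrough;
var live = new PassThrough(); // everything written here fans out to all connected listeners
// Clients "tune in" at e.g. http://localhost:3000/livestream.mp3
http.createServer(function (req, res) {
  if (req.url === '/livestream.mp3') {
    res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
    live.pipe(res);
    req.on('close', function () { live.unpipe(res); });
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000);
// Call this from the ffmpeg 'exit' handler once audio_<time>.mp3 exists.
function broadcastChunk(mp3Path) {
  fs.createReadStream(mp3Path).pipe(live, { end: false });
}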
I have an internet audio stream that's constantly being broadcast (accessible via an HTTP URL), and I want to record it with Node.js, writing files that consist of one-minute segments.
Every module or article I find on the subject is about streaming from Node.js to the browser. I just want to open the stream and record it, time block by time block, to files.
Any ideas?
I think the project at https://github.com/TooTallNate/node-icy makes this easy: just do what you need to with the res object. In the example it is sent to the audio system:
var icy = require('icy');
var lame = require('lame');
var Speaker = require('speaker');
// URL to a known ICY stream
var url = 'http://firewall.pulsradio.com';
// connect to the remote stream
icy.get(url, function (res) {
  // log the HTTP response headers
  console.error(res.headers);
  // log any "metadata" events that happen
  res.on('metadata', function (metadata) {
    var parsed = icy.parse(metadata);
    console.error(parsed);
  });
  // Let's play the music (assuming MP3 data).
  // lame decodes and Speaker sends to speakers!
  res.pipe(new lame.Decoder())
     .pipe(new Speaker());
});
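To adapt that to the original goal (one-minute files instead of the speakers), one possible sketch is to rotate a write stream on a timer inside the same icy.get callback; the file naming and the assumption that the stream is MP3 are mine, and cuts may land mid-frame (players usually resync, but re-muxing with ffmpeg afterwards is cleaner):
var icy = require('icy');
var fs = require('fs');
var url = 'http://firewall.pulsradio.com'; // same example stream as above
icy.get(url, function (res) {
  var current = null;
  function rotate() {
    if (current) current.end();
    // one file per minute, e.g. segment-2015-01-01T00-00-00-000Z.mp3 (assumed naming)
    var name = 'segment-' + new Date().toISOString().replace(/[:.]/g, '-') + '.mp3';
    current = fs.createWriteStream(name);
  }
  rotate();
  setInterval(rotate, 60 * 1000);
  // write the (assumed MP3) audio bytes as they arrive
  res.on('data', function (chunk) {
    current.write(chunk);
  });
});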
I'm trying to write a Meteor.JS app that uses peer-to-peer "radio" communication. When a user presses a button, it broadcasts their microphone output to other people.
I have some code that gets permission to record audio, and it successfully gets a MediaStream object, but I can't figure out how to get the data out of the MediaStream object as it is being recorded.
I see there is a method defined somewhere for getting all of the tracks of the recorded audio. I'm sure I could write some kind of loop that notifies me when audio has been added, but it seems like there should be a native, event-driven way to retrieve the audio from getUserMedia. Am I missing something? Thanks
What you will want to do is access the stream through the Web Audio API (for the recording part), after assigning a variable to the stream grabbed through getUserMedia (I call it localStream). You can create as many MediaStreamSource nodes as you want from one stream, so you can record it WHILE sending it to numerous people through different RTCPeerConnections.
var AudioContextClass = window.AudioContext || window.webkitAudioContext;
var audioContext = new AudioContextClass();
var source = audioContext.createMediaStreamSource(localStream);
var AudioRecorder = function (source) {
  var recording = false;
  var worker = new Worker(WORKER_PATH); // worker path from the original recorder script; unused in this trimmed snippet
  var config = {};
  var bufferLen = 4096;
  this.context = source.context;
  // createJavaScriptNode is the older name for createScriptProcessor
  this.node = (this.context.createScriptProcessor ||
               this.context.createJavaScriptNode).call(this.context, bufferLen, 2, 2);
  this.node.onaudioprocess = function (e) {
    var sample = e.inputBuffer.getChannelData(0);
    // do what you want with the audio sample: push to a blob or send over a WebSocket
  };
  source.connect(this.node);
  this.node.connect(this.context.destination);
};
Here is a version I wrote/modified to send audio over WebSockets for recording on a server.
For sending the audio only when it is available, you COULD use WebSockets or a WebRTC peer connection.
You will grab the stream through the getUserMedia success callback (you should have a global variable holding the stream for all your connections). When a listener becomes available, you can use a signalling server to forward the requesting SDPs to the audio supplier, and you can set the requesting SDPs to receive-only.
PeerConnection example 1
PeerConnection example 2
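As a rough illustration of the WebSocket route mentioned above (a sketch, not part of the linked examples; the endpoint URL is an assumption and the server side is not shown), the onaudioprocess hook from the AudioRecorder can forward each buffer like this:
var recorder = new AudioRecorder(source); // the AudioRecorder and source defined above
var ws = new WebSocket('wss://example.com/audio-ingest'); // assumed ingest endpoint
ws.binaryType = 'arraybuffer';
recorder.node.onaudioprocess = function (e) {
  var sample = e.inputBuffer.getChannelData(0); // Float32Array of bufferLen samples
  if (ws.readyState === WebSocket.OPEN) {
    // copy before sending: the underlying buffer may be reused between callbacks
    ws.send(new Float32Array(sample).buffer);
  }
};
A real setup would also transmit the sample rate and channel count once, and probably downsample or encode before sending raw Float32 PCM.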
Try with code like this:
navigator.webkitGetUserMedia({ audio: true, video: false },
  function (stream) { // Success callback
    var audioElement = document.createElement("audio");
    document.body.appendChild(audioElement);
    audioElement.src = URL.createObjectURL(stream);
    audioElement.play();
  },
  function () { // Error callback
    console.log("error");
  });
You may use the stream from the success callback to create an object URL and pass it into an HTML5 audio element.
Fiddle around with it at http://jsfiddle.net/veritas/2B9Pq/
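Note that webkitGetUserMedia and URL.createObjectURL(stream) are APIs of that era; a rough modern equivalent (a sketch, not a drop-in replacement for the fiddle above) uses the promise-based API and srcObject:
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(function (stream) {
    var audioElement = document.createElement("audio");
    document.body.appendChild(audioElement);
    audioElement.srcObject = stream; // replaces URL.createObjectURL(stream)
    audioElement.play();             // may still require a user gesture in some browsers
  })
  .catch(function (err) {
    console.log("error", err);
  });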
Hi!
I would like to create an application that supports TTML closed captions.
My ism/manifest file contains the TTML-based closed captions; how can I use them?
I found this site:
https://developers.google.com/cast/docs/player
where they describe the following:
Segmented TTML & WebVTT
Use Segmented TTML for Smooth Streaming and WebVTT - Web Video Text Tracks for HLS.
To enable:
protocol_.enableStream(streamIndex, true);
player_.enableCaptions(true);
But I can't find an example for my problem. Do I have to enable this after creating my host on the receiver side?
Are there any sample apps for this?
UPDATE #1
Here's my code:
window.onload = function () {
  var mediaElement = document.getElementById('video'); // 'video' is an HTML <video> tag
  var mediamanager = new cast.receiver.MediaManager(mediaElement);
  var url = "http://playready.directtaps.net/smoothstreaming/SSWSS720H264/SuperSpeedway_720.ism/Manifest"; // Just a sample URL
  var host = new cast.player.api.Host({ 'mediaElement': mediaElement, 'url': url });
  window.player = new cast.player.api.Player(host);
  protocol = cast.player.api.CreateSmoothStreamingProtocol(host);
  var initStart = 0;
  window.player.load(protocol, initStart);
  mediamanager.loadedmetadata = function (loadinfo) {
    // onMetadataLoaded fired, set the caption
  };
}
It doesn't work. So I decided to get the streams:
var streamCount = protocol.getStreamCount();
And streamCount is 0. The manifest contains the closed captions; should I be using something other than getStreamCount()?
Thank you very much!
The correct approach is to listen for the metadataloaded event. Once that event has fired, you are good to get the stream count, but don't do that before the event fires. Then you can enable the stream for the index you want (for the language you want, in case you have multiple), and then enable captions. If you want to change language, first disable captions, then select a different stream index, and then enable them again.
No samples yet.
Yes - you should enable it after creating the Host and starting playback.
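A rough sketch of that order of operations against the code in the question (this would live inside the window.onload handler above; CAPTION_STREAM_INDEX is a placeholder, since finding the TTML track's index depends on inspecting the protocol's stream information, and the answer's "metadataloaded" may equally refer to the MediaManager's metadata-loaded hook rather than the media element event used here):
mediaElement.addEventListener('loadedmetadata', function () {
  var streamCount = protocol.getStreamCount(); // non-zero once metadata has loaded
  console.log('streams available: ' + streamCount);
  var CAPTION_STREAM_INDEX = 0; // placeholder: pick the index of the TTML text stream
  protocol.enableStream(CAPTION_STREAM_INDEX, true);
  window.player.enableCaptions(true);
});
// To switch caption language later: disable captions, swap stream indexes, re-enable.
// window.player.enableCaptions(false);
// protocol.enableStream(oldIndex, false);
// protocol.enableStream(newIndex, true);
// window.player.enableCaptions(true);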