NodeJS Express video streaming server with server-controlled range - node.js

So I don't currently have any code; it's just a general question. I've seen multiple articles and SO questions that handle this issue, except that in all of them the byte range header, which essentially specifies what segment of the video is sent back to the client, is also specified by the client. I want the server to keep track of the current video position and stream the video back to the client from there.
The articles and SO Questions I've seen for reference:
https://blog.logrocket.com/build-video-streaming-server-node/
Streaming a video file to an html5 video player with Node.js so that the video controls continue to work?

Here's a solution that does not involve explicitly changing the header for the <video> tag src request or controlling the byte range piped from the server.
The video element has a property called currentTime (in seconds) which allows the client to control its own range header. A separate request to the server for an initial value for 'currentTime' would allow the server to control the start time of the video.
<video id="videoPlayer" muted><source src="/srcendpoint" type="video/mp4" /></video>
<script>
  const video = document.getElementById("videoPlayer")
  getCurrentSecAndPlay()

  async function getCurrentSecAndPlay() {
    // Ask the server where playback should start, then seek there and play
    let response = await fetch('/currentPosition').then(response => response.json())
    video.currentTime += response.currentSec
    video.play()
  }
</script>
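On the server side, a minimal Express sketch could look like the following. The endpoint names match the snippet above, but the elapsed-time bookkeeping and file path are assumptions for illustration, not from the original post:
const express = require('express');
const path = require('path');
const app = express();

const startedAt = Date.now(); // treat server start as the start of the "broadcast"

app.get('/currentPosition', (req, res) => {
  // Tell the client how many seconds into the video it should begin
  res.json({ currentSec: (Date.now() - startedAt) / 1000 });
});

app.get('/srcendpoint', (req, res) => {
  // res.sendFile answers the browser's own Range requests for the file
  res.sendFile(path.join(__dirname, 'video.mp4'));
});

app.listen(3000);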
This is sort of a work-around solution to mimic livestream behaviour. Maybe it's good enough for you. I do not have a lot of knowledge on HLS and RTMP but if you wanted to make a true livestream you should study those things.
sources:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
https://www.w3.org/2010/05/video/mediaevents.html


Screen Recording with both headphone & system audio

I am trying to build a web-application with the functionality of screen-recording with system audio + headphone-mic audio being captured in the saved video.
I have been thoroughly googling for a solution, but my findings show multiple browser solutions where the above works only as long as headphones are NOT connected, meaning the microphone input comes from the system rather than a headset.
If headphones are connected, all of these solutions capture the screen without the video's audio, plus only the microphone audio from my headset. To clarify: the recording should contain the audio of the video being played during the recording as well as the headset-mic audio.
This is thoroughly available in native applications, however I am searching for a way to do this on a browser.
If there are no solutions for this currently that anybody knows of, some insight on the limitations around developing this would also really help, thank you.
Your browser manages the media input being received in the selected tab/window.
To receive media input, make sure the Share Audio checkbox in the image below is checked. However, this will only record the media audio being played in your headphones. To also receive microphone audio, the opposite must be done, i.e. the checkbox should be unchecked, or you merge the microphone audio in separately when saving the recorded video.
https://slack-files.com/T1JA07M6W-F0297CM7F32-89e7407216
Create two constants, one retrieving the on-screen video, the other retrieving the microphone audio:
const DISPLAY_STREAM = await navigator.mediaDevices.getDisplayMedia({video: {cursor: "motion"}, audio: {'echoCancellation': true}}); // retrieving screen-media
const VOICE_STREAM = await navigator.mediaDevices.getUserMedia({ audio: {'echoCancellation': true}, video: false }); // retrieving microphone-media
Use AudioContext to retrieve audio sources from getUserMedia() and getDisplayMedia() separately:
const AUDIO_CONTEXT = new AudioContext();
const MEDIA_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(DISPLAY_STREAM); // passing source of on-screen audio
const MIC_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(VOICE_STREAM); // passing source of microphone audio
Use the method below to create a new audio destination that acts as the merger (the merged version of the audio), then connect both audio sources to it:
const AUDIO_MERGER = AUDIO_CONTEXT.createMediaStreamDestination(); // audio merger
MEDIA_AUDIO.connect(AUDIO_MERGER); // passing media-audio to merger
MIC_AUDIO.connect(AUDIO_MERGER); // passing microphone-audio to merger
Finally, combine the on-screen video track and the merged audio tracks into one array and pass it to a new MediaStream:
const TRACKS = [...DISPLAY_STREAM.getVideoTracks(), ...AUDIO_MERGER.stream.getTracks()]; // combining on-screen video with merged audio
const stream = new MediaStream(TRACKS);
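From here, a small sketch of the saving step (this part is an assumption; the original answer stops at the combined stream): record the stream with MediaRecorder and offer the result as a download.
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const chunks = [];
recorder.ondataavailable = (event) => chunks.push(event.data);
recorder.onstop = () => {
  // Wrap the recorded chunks in a Blob and trigger a download
  const blob = new Blob(chunks, { type: 'video/webm' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'recording.webm';
  link.click();
};
recorder.start();
// ...call recorder.stop() when the user ends the recording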

Does v3 Google Cast receiver parse alternative audio tracks from an hls master playlist automatically or do I have to define them in the sender?

I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES, AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO, AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and receiver app, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio tracks.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, loadRequestData => {
    loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
    const audioTracksManager = playerManager.getAudioTracksManager();
    console.log(audioTracksManager.getTracks());
    console.log('Load request: ', loadRequestData);
    return loadRequestData;
  });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks); however, if I try to access the tracks property directly in the LOAD message interceptor I get undefined.
I currently cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
Ok, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place, namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as it is properly documented here).
After placing my logic into this event listener I was able to access and programmatically set my audio tracks.
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.
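For illustration, a minimal sketch of the fix described above, querying the AudioTracksManager from a PLAYER_LOAD_COMPLETE listener (the track-switching logic at the end is just an example, not from the original answer):
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    const audioTracksManager = playerManager.getAudioTracksManager();
    const tracks = audioTracksManager.getTracks();
    console.log('Audio tracks parsed from the HLS playlist:', tracks);
    // Example: programmatically switch to the second audio track if present
    if (tracks.length > 1) {
      audioTracksManager.setActiveById(tracks[1].trackId);
    }
  });

context.start();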

Playing short wav files in Google Home

I would like to play a short sound for a more amusing output. If I understand the documentation correctly, it should be possible with a reply in api.ai of something like this SSML:
<speak>Okay here we go: <audio src="http://example.com/boing.wav">boing</audio>. You are welcome!</speak>
Just for reference, SSML means Speech Synthesis Markup Language.
The web simulator doesn't play this sound; instead, all tags seem to be stripped out. Is that not supported yet, or did I do something wrong?
The src URL must also be an https URL (Google Cloud Storage can host your audio files on an https URL).
https://developers.google.com/actions/reference/ssml
Without seeing your source, there are a few possible reasons:
The audio file must be served publicly via HTTPS, not HTTP. See the description for <audio> on https://developers.google.com/actions/reference/ssml
The audio file should be in a correct format (see https://developers.google.com/actions/reference/ssml again).
If you're returning it via the webhook response, you need to make sure you set the data.google.is_ssml property in the JSON to true (see https://developers.google.com/actions/reference/webhook-format#response).
I have the following for my node.js server which works (well, except for the URL):
var msg = `
  <speak>
    Tone one
    <audio src="https://example.com/wav/Dtmf-1.wav"></audio>
    Tone two
    <audio src="https://example.com/wav16/Dtmf-2.wav"></audio>
    Foghorn
    <audio src="https://example.com/mp3/foghorn.mp3"></audio>
    Done
  </speak>
`;
var reply = {
  speech: msg,
  data: {
    google: {
      "expect_user_response": true,
      "is_ssml": true
    }
  }
};
res.send(reply);
So here is what I have for the code. It is in the Text response field on my intent.
<speak> One second <break time="3s"/> OK, I have used the best quantum processing algorithms known to computer science! Your silly name is $color $number. I hope you like it. <audio src="https://www.partnersinrhyme.com/files/sounds1/WAV/sports/baseball/Ball_Hit_Cheer.wav"></audio> </speak>
It does not work in the testing area of api.ai, but does work when I turn on the integration and try it in the Google simulator here: https://developers.google.com/actions/tools/web-simulator

Chrome "stalling" when streaming mp3 file from nodejs windows only

We've got a really annoying bug when trying to send mp3 data. We've got the following set up.
Web cam producing aac -> ffmpeg convert to adts -> send to nodejs server -> ffmpeg on server converts adts to mp3 -> mp3 then streamed to browser.
This works *perfectly* on Linux (Chrome with HTML5 and Flash, Firefox with Flash only).
However, on Windows the sound just "stalls", no matter what combination we use (browser/HTML5/Flash). If, however, we shut down the server, the sound then immediately starts to play as we expect.
For some reason, on Windows-based machines it's as if the sound is being buffered, "waiting" for something, but we don't know what that is.
Any help would be greatly appreciated.
Relevant code in node
res.setHeader('Connection', 'Transfer-Encoding');
res.setHeader('Content-Type', 'audio/mpeg');
res.setHeader('Transfer-Encoding', 'chunked');
res.writeHeader('206');

that.eventEmitter.on('soundData', function (data) {
  debug("Got sound data" + data.cameraId + " " + req.params.camera_id);
  if (req.params.camera_id == data.cameraId) {
    debug("Sending data direct to browser");
    res.write(data.sound);
  }
});
Code on browser
soundManager.setup({
  url: 'http://dashboard.agricamera.co.uk/themes/agricamv2/swf/soundmanager2.swf',
  useHTML5Audio: false,
  onready: function () {
    that.log("Sound manager is now ready");
    var mySound = soundManager.createSound({
      url: src,
      autoLoad: true,
      autoPlay: true,
      stream: true,
    });
  }
});
If, however, we shut down the server, the sound then immediately starts to play as we expect.
For some reason, on Windows-based machines it's as if the sound is being buffered, "waiting" for something, but we don't know what that is.
That's exactly what's happening.
First off, Chrome can play ADTS streams, so if possible just use that directly and save some audio quality by not running a second lossy codec in the chain.
Next, don't use soundManager, or at least let it use HTML5 audio. You don't need the Flash fallback these days in most cases, and Chrome is perfectly capable of playing your streams. I suspect this is where your problem lies.
Next, try disabling chunked transfer. Many clients don't like transfer encoding on streams.
Finally, I have seen cases where Chrome's built-in media handling (which I believe varies from OS to OS) cannot sync to the stream. There are a few bug tickets out there for Chromium. If your playback timer isn't incrementing, this is likely your problem and you can simply try to reload the stream programmatically to work around it.
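A rough sketch of those suggestions combined (the route name and wiring are assumptions, not from the original post): drop the explicit Connection/Transfer-Encoding headers and let a native HTML5 audio element play the stream instead of the Flash fallback.
// Server side: plain streaming response, no explicit chunked transfer encoding
app.get('/stream/:camera_id', function (req, res) {
  res.setHeader('Content-Type', 'audio/mpeg');
  that.eventEmitter.on('soundData', function (data) {
    if (req.params.camera_id == data.cameraId) {
      res.write(data.sound);
    }
  });
});
// Client side: native HTML5 audio instead of soundManager's Flash player
// <audio src="/stream/1" autoplay></audio>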

Can JPlayer play file from byte array?

I am using jPlayer, which plays different audio files based on the user input. Every time a user enters an input, I call a REST web service to retrieve the audio file to play. The response from the REST service is a byte[].
What I am trying to achieve is to keep this byte array in memory instead of writing it to a file, and use that byte[] as the source for jPlayer. I am not sure how to get jPlayer to play a byte[].
var file = [[${audiofile}]]; // Thymeleaf inline expression; evaluates to a byte[]
$(document).ready(function () {
  $("#jquery_jplayer_1").jPlayer({
    ready: function () {
      $(this).jPlayer("setMedia", {
        wav: file
      });
    }
  });
});
The variable file evaluates to a byte[]. When trying to play the audio, I see the following error in the console.
Uncaught TypeError: Object
-1,-5,-112,-60,0,3,19,124,-73.......
I would appreciate it if somebody has any suggestions.
Thanks
Unfortunately, this feature looks unimplemented right now. From the source code of jPlayer 2.9.2, at line 1945:
setMedia: function(media) {
  /* media[format] = String: URL of format. Must contain all of the supplied option's video or audio formats.
   * media.poster = String: Video poster URL.
   * media.track = Array: Of objects defining the track element: kind, src, srclang, label, def.
   * media.stream = Boolean: * NOT IMPLEMENTED * Designating actual media streams. ie., "false/undefined" for files. Plan to refresh the flash every so often.
   */
Please notice the last line. Wish I had better news.
REF: https://github.com/happyworm/jPlayer/blob/master/src/javascript/jplayer/jquery.jplayer.js
Some fellows came close to a solution, but not with JPlayer. REF: How to play audio byte array (not file!) with JavaScript in a browser
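For completeness, a hedged sketch of that kind of workaround applied to jPlayer (the endpoint, the userInput variable, and the assumption that the service can return the raw bytes are all hypothetical): fetch the audio as an ArrayBuffer, wrap it in a Blob, and hand the object URL to setMedia as if it were a normal file URL.
fetch('/api/audio?input=' + encodeURIComponent(userInput)) // hypothetical endpoint returning raw WAV bytes
  .then(response => response.arrayBuffer())
  .then(buffer => {
    // Wrap the bytes in a Blob; jPlayer then treats the object URL like any other URL
    const blobUrl = URL.createObjectURL(new Blob([buffer], { type: 'audio/wav' }));
    $("#jquery_jplayer_1").jPlayer("setMedia", { wav: blobUrl });
    $("#jquery_jplayer_1").jPlayer("play");
  });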
