How to get bitrate in dash.js player while streaming the video? - mpeg-dash

I am setting up a DASH server in Mininet, and I want to store the bitrates at which the video streams, as well as the quality of the video chunks the DASH server sends at each bitrate. How would I get and store all of this information in a file?

The dash.js player actually includes some event callbacks to handle this type of event.
Take a look at the source code on GitHub at dash.js/src/streaming/MediaPlayerEvents.js:
/**
 * Triggered when an ABR up/down switch is initiated; either by user in manual mode or auto mode via ABR rules.
 * @event MediaPlayerEvents#QUALITY_CHANGE_REQUESTED
 */
this.QUALITY_CHANGE_REQUESTED = 'qualityChangeRequested';
/**
 * Triggered when the new ABR quality is being rendered on-screen.
 * @event MediaPlayerEvents#QUALITY_CHANGE_RENDERED
 */
this.QUALITY_CHANGE_RENDERED = 'qualityChangeRendered';
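For example, a minimal sketch that hooks these events, collects the selected quality and bitrate, and downloads the log as a JSON file. This assumes a reasonably recent dash.js where getBitrateInfoListFor() and these event payload fields (mediaType, newQuality) are available; the MPD URL is a placeholder:

var player = dashjs.MediaPlayer().create();
player.initialize(document.querySelector("video"), "http://example.com/manifest.mpd", true); // placeholder MPD URL

var qualityLog = [];

player.on(dashjs.MediaPlayer.events.QUALITY_CHANGE_RENDERED, function (e) {
  // Record which quality index is now being rendered and the bitrate dash.js reports for it.
  var bitrateList = player.getBitrateInfoListFor(e.mediaType);
  var info = bitrateList && bitrateList[e.newQuality];
  qualityLog.push({
    time: Date.now(),
    mediaType: e.mediaType,
    qualityIndex: e.newQuality,
    bitrate: info ? info.bitrate : null // bitrate as reported by dash.js for this quality
  });
});

// Save the collected entries to a file by downloading them as JSON.
function saveQualityLog() {
  var blob = new Blob([JSON.stringify(qualityLog, null, 2)], { type: "application/json" });
  var link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "bitrate-log.json";
  link.click();
}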

Related

NodeJS Express video streaming server with server-controlled range

So I don't currently have any code; it's just a general question. I've seen multiple articles and SO questions that handle this issue, except that in all of those the byte-range header, which essentially specifies what time segment of the video is sent back to the client, is also specified by the client. I want the server to keep track of the current video position and stream the video back to the client from there.
The articles and SO Questions I've seen for reference:
https://blog.logrocket.com/build-video-streaming-server-node/
Streaming a video file to an html5 video player with Node.js so that the video controls continue to work?
Here's a solution that does not involve explicitly changing the header for the <video> tag src request or controlling the byte range piped from the server.
The video element has a property called currentTime (in seconds) which allows the client to control its own range header. A separate request to the server for an initial value for 'currentTime' would allow the server to control the start time of the video.
<video id="videoPlayer" muted><source src="/srcendpoint" type="video/mp4" /></video>
<script>
  const video = document.getElementById("videoPlayer")

  getCurrentSecAndPlay()

  async function getCurrentSecAndPlay() {
    // Ask the server where playback should start, then jump there and start playing.
    const response = await fetch('/currentPosition').then(res => res.json())
    video.currentTime += response.currentSec
    video.play()
  }
</script>
This is sort of a workaround to mimic livestream behaviour; maybe it's good enough for you. I do not have much knowledge of HLS and RTMP, but if you want to build a true livestream you should study those.
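Since the question mentions Node/Express, a minimal sketch of a matching server side might look like the following. The file path and port are assumptions; the endpoint names match the client snippet above, and the "broadcast" is assumed to start when the server is launched:

const express = require('express')
const app = express()

const VIDEO_PATH = 'video.mp4'   // assumed location of the file being "broadcast"
const startedAt = Date.now()     // treat server start as the start of the stream

// Tell the client how many seconds of the "broadcast" have already elapsed.
app.get('/currentPosition', (req, res) => {
  res.json({ currentSec: (Date.now() - startedAt) / 1000 })
})

// Serve the file itself; Express handles the browser's own range requests.
app.get('/srcendpoint', (req, res) => {
  res.sendFile(VIDEO_PATH, { root: __dirname })
})

app.listen(3000)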
Sources:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
https://www.w3.org/2010/05/video/mediaevents.html

Screen Recording with both headphone & system audio

I am trying to build a web-application with the functionality of screen-recording with system audio + headphone-mic audio being captured in the saved video.
I have been thoroughly googling for a solution to this; however, my findings show multiple browser solutions where the above works only as long as headphones are NOT connected, meaning the microphone input is coming from the system rather than the headset.
When headphones are connected, all of these solutions capture the screen without the video's audio, along with the microphone audio from my headset. To re-clarify: it should record the audio of the video being played during the recording, and the headset-mic audio as well.
This is thoroughly available in native applications, however I am searching for a way to do this on a browser.
If there are no solutions for this currently that anybody knows of, some insight on the limitations around developing this would also really help, thank you.
Your browser manages the media input being received in the selected tab/window.
To receive the tab/window audio, you need to ensure the Share Audio checkbox in the image below is checked. However, this will only record the media audio being played in your headphones. When it comes to receiving microphone audio, the opposite must be done, i.e. the checkbox should be unchecked, or the microphone audio must be merged in separately when saving the recorded video.
https://slack-files.com/T1JA07M6W-F0297CM7F32-89e7407216
Create two constants: one retrieving the on-screen video, the other retrieving the microphone audio:
const DISPLAY_STREAM = await navigator.mediaDevices.getDisplayMedia({video: {cursor: "motion"}, audio: {'echoCancellation': true}}); // retrieving screen-media
const VOICE_STREAM = await navigator.mediaDevices.getUserMedia({ audio: {'echoCancellation': true}, video: false }); // retrieving microphone-media
Use AudioContext to retrieve audio sources from getUserMedia() and getDisplayMedia() separately:
const AUDIO_CONTEXT = new AudioContext();
const MEDIA_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(DISPLAY_STREAM); // passing source of on-screen audio
const MIC_AUDIO = AUDIO_CONTEXT.createMediaStreamSource(VOICE_STREAM); // passing source of microphone audio
Use the method below to create a new audio destination that will act as the merger (the merged version of the audio), then connect both audio sources to it:
const AUDIO_MERGER = AUDIO_CONTEXT.createMediaStreamDestination(); // audio merger
MEDIA_AUDIO.connect(AUDIO_MERGER); // passing media-audio to merger
MIC_AUDIO.connect(AUDIO_MERGER); // passing microphone-audio to merger
Finally, combine the on-screen video track and the merged audio tracks into one array, and pass it to a new MediaStream:
const TRACKS = [...DISPLAY_STREAM.getVideoTracks(), ...AUDIO_MERGER.stream.getTracks()] // connecting on-screen video with merged-audio
const stream = new MediaStream(TRACKS);
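To actually save the recording, a minimal sketch using MediaRecorder could look like this; the video/webm mimeType and the download filename are assumptions:

const RECORDER = new MediaRecorder(stream, { mimeType: "video/webm" }); // assumed container format
const CHUNKS = [];

RECORDER.ondataavailable = (event) => CHUNKS.push(event.data);

RECORDER.onstop = () => {
  // Bundle the recorded chunks into a Blob and trigger a download.
  const blob = new Blob(CHUNKS, { type: "video/webm" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "recording.webm"; // assumed filename
  link.click();
};

RECORDER.start();
// ...later, call RECORDER.stop() to finish and save the file.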

Set Spotify playing position

I'm using the SpotifyAPI-NET on GitHub from JohnnyCrazy to play and pause songs on my Spotify desktop client. This works fine.
Now I want to change the playing position of the currently playing song. So I only want to say something like "SetPlayingPosition(64)" to play the current song from position "01:04". It seems that the SpotifyLocalAPI doesn't support this feature.
To play and pause a song the API uses a message with the following format:
http://127.0.0.1:4381/remote/pause.json?pause=true&ref=&cors=&_=1520448230&oauth=oauth&csrf=csrf
I tried to find a summary of possible commands in this format, but I didn't find anything.
Is there something like http://127.0.0.1:4381/remote/seek.json... that I can use to seek to a specific position?
EDIT:
I tried to write my own method in the RemoteHandler class in the local portion of the SpotifyAPI. With this method I can set the position in the current playback.
Here's my code:
internal async Task SendPositionRequest(double playingPositionSec) // The desired playback position in seconds
{
    StatusResponse status = GetNewStatus(); // Get the current status of the local desktop API
    string trackUri = "spotify:track:" + status.Track.TrackResource.ParseUri().Id; // The URI of the current track
    TimeSpan playingPositionTimeSpan = TimeSpan.FromSeconds(playingPositionSec);
    string playingPosStr = playingPositionTimeSpan.ToString(@"mm\:ss"); // Convert the playing position to a string (format mm:ss)
    string playingContext = "spotify:artist:1EfwyuCzDQpCslZc8C9gkG";
    await SendPlayRequest(trackUri + "#" + playingPosStr, playingContext);
    if (!status.Playing) { await SendPauseRequest(); }
}
I need to call the SendPlayRequest() method with the correct playingContext because when the current song is part of a playlist and you call SendPlayRequest() without the context, the next song isn't from the playlist anymore.
But you can see that I use a fixed context at the moment.
So my question is now: How can I get the context (playlist, artist, ...) of the currently played song with the SpotifyLocalAPI?
The SeekPlayback method of the library you mentioned lets you seek through playback on whatever device your user is listening on; see the library's documentation for details.
Seeking playback is not currently possible using the Spotify Local API portion of that library.
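If you are not tied to the local desktop API, the Spotify Web API endpoint that SeekPlayback wraps can also be called directly. A minimal JavaScript sketch, assuming you already have an OAuth access token with the user-modify-playback-state scope:

// Seek the user's active playback to a given position via the Spotify Web API.
async function seekTo(positionSeconds, accessToken) {
  const positionMs = Math.round(positionSeconds * 1000);
  const response = await fetch(
    "https://api.spotify.com/v1/me/player/seek?position_ms=" + positionMs,
    { method: "PUT", headers: { Authorization: "Bearer " + accessToken } }
  );
  if (!response.ok) {
    throw new Error("Seek failed: " + response.status);
  }
}

// Example: jump to 01:04 of the currently playing track.
// seekTo(64, accessToken);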

Can JPlayer play file from byte array?

I am using JPlayer which plays different audio files based on the the user input. Every time a user enters an input, I am calling a REST web service to retrieve the audio file to play. The response from the REST service is a byte[].
What I am trying to achieve is to keep this array of bytes in memory instead of writing it to a file, and use that byte[] for jPlayer. I am not sure how to get jPlayer to play a byte[].
var file = [[${audiofile}]]; // server-side template expression that evaluates to a byte[]
$(document).ready(function () {
    $("#jquery_jplayer_1").jPlayer({
        ready: function () {
            $(this).jPlayer("setMedia", {
                wav: file
            });
        }
    });
});
The variable file evaluates to a byte[]. When trying to play the audio, I see the following error in the console:
Uncaught TypeError: Object
-1,-5,-112,-60,0,3,19,124,-73.......
I would appreciate any suggestions.
Thanks
Unfortunately, this feature appears to be unimplemented right now. From the source code of jPlayer 2.9.2, at line 1945:
setMedia: function(media) {
/* media[format] = String: URL of format. Must contain all of the supplied option's video or audio formats.
* media.poster = String: Video poster URL.
* media.track = Array: Of objects defining the track element: kind, src, srclang, label, def.
* media.stream = Boolean: * NOT IMPLEMENTED * Designating actual media streams. ie., "false/undefined" for files. Plan to refresh the flash every so often.
*/
Please notice the last line. Wish I had better news.
REF: https://github.com/happyworm/jPlayer/blob/master/src/javascript/jplayer/jquery.jplayer.js
Some fellows came close to a solution, but not with JPlayer. REF: How to play audio byte array (not file!) with JavaScript in a browser
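As a rough illustration of the approach in that linked question (not jPlayer itself), you could turn the serialized byte array into a Blob URL and play it with a plain Audio element. A minimal sketch, assuming the service really returns WAV data (the audio/wav MIME type is an assumption):

// 'file' is assumed to be an array of signed byte values such as [-1, -5, -112, ...].
function playByteArray(byteArray) {
  // Convert signed bytes to unsigned and wrap them in a typed array.
  const bytes = Uint8Array.from(byteArray, (b) => b & 0xff);
  const blob = new Blob([bytes], { type: "audio/wav" }); // assumed MIME type
  const url = URL.createObjectURL(blob);
  new Audio(url).play();
}

// playByteArray(file);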

Windows Azure Media Services Apple HLS Streaming - No video plays only audio plays

I am using Windows Azure Media Services to upload video files, encode, and then publish them.
I encode the files using the Windows Azure Media Services Samples code, and I have found that when I use the code to convert ".mp4" files to Apple HLS, it does not function properly on iOS devices: only audio plays and no video is seen. Whereas if I use the Windows Azure Media Services portal to encode and publish the files as HLS, they work perfectly fine on iOS devices (both audio and video play)!
I have been banging my head on this for days now and would be really obliged if somebody could guide me through the encoding process (through code).
This is what I have so far:
static IAsset CreateEncodingJob(IAsset asset)
{
// Declare a new job.
IJob job = _context.Jobs.Create("My encoding job");
// Get a media processor reference, and pass to it the name of the
// processor to use for the specific task.
IMediaProcessor processor = GetLatestMediaProcessorByName("Windows Azure Media Encoder");
// Create a task with the encoding details, using a string preset.
ITask task = job.Tasks.AddNew("My encoding task",
processor,
"H264 Broadband SD 4x3",
TaskOptions.ProtectedConfiguration);
// Specify the input asset to be encoded.
task.InputAssets.Add(asset);
// Add an output asset to contain the results of the job.
// This output is specified as AssetCreationOptions.None, which
// means the output asset is in the clear (unencrypted).
task.OutputAssets.AddNew("Output MP4 asset",
true,
AssetCreationOptions.None);
// Launch the job.
job.Submit();
// Checks job progress and prints to the console.
CheckJobProgress(job.Id);
// Get an updated job reference, after waiting for the job
// on the thread in the CheckJobProgress method.
job = GetJob(job.Id);
// Get a reference to the output asset from the job.
IAsset outputAsset = job.OutputMediaAssets[0];
return outputAsset;
}
static IAsset CreateMp4ToSmoothJob(IAsset asset)
{
// Read the encryption configuration data into a string.
string configuration = File.ReadAllText(Path.GetFullPath(_configFilePath + @"\MediaPackager_MP4ToSmooth.xml"));
//Publish the asset.
//GetStreamingOriginLocatorformp4(asset.Id);
// Declare a new job.
IJob job = _context.Jobs.Create("My MP4 to Smooth job");
// Get a media processor reference, and pass to it the name of the
// processor to use for the specific task.
IMediaProcessor processor = GetLatestMediaProcessorByName("Windows Azure Media Packager");
// Create a task with the encoding details, using a configuration file. Specify
// the use of protected configuration, which encrypts sensitive config data.
ITask task = job.Tasks.AddNew("My Mp4 to Smooth Task",
processor,
configuration,
TaskOptions.ProtectedConfiguration);
// Specify the input asset to be encoded.
task.InputAssets.Add(asset);
// Add an output asset to contain the results of the job.
task.OutputAssets.AddNew("Output Smooth asset",
true,
AssetCreationOptions.None);
// Launch the job.
job.Submit();
// Checks job progress and prints to the console.
CheckJobProgress(job.Id);
job = GetJob(job.Id);
IAsset outputAsset = job.OutputMediaAssets[0];
// Optionally download the output to the local machine.
//DownloadAssetToLocal(job.Id, _outputIsmFolder);
return outputAsset;
}
// Shows how to encode from smooth streaming to Apple HLS format.
static IAsset CreateSmoothToHlsJob(IAsset outputSmoothAsset)
{
// Read the encryption configuration data into a string.
string configuration = File.ReadAllText(Path.GetFullPath(_configFilePath + @"\MediaPackager_SmoothToHLS.xml"));
//var getismfile = from p in outputSmoothAsset.Files
// where p.Name.EndsWith(".ism")
// select p;
//IAssetFile manifestFile = getismfile.First();
//manifestFile.IsPrimary = true;
var ismAssetFiles = outputSmoothAsset.AssetFiles.ToList().Where(f => f.Name.EndsWith(".ism", StringComparison.OrdinalIgnoreCase)).ToArray();
if (ismAssetFiles.Count() != 1)
throw new ArgumentException("The asset should have only one .ism file");
ismAssetFiles.First().IsPrimary = true;
ismAssetFiles.First().Update();
//Use the smooth asset as input asset
IAsset asset = outputSmoothAsset;
// Declare a new job.
IJob job = _context.Jobs.Create("My Smooth Streams to Apple HLS job");
// Get a media processor reference, and pass to it the name of the
// processor to use for the specific task.
IMediaProcessor processor = GetMediaProcessor("Smooth Streams to HLS Task");
// Create a task with the encoding details, using a configuration file.
ITask task = job.Tasks.AddNew("My Smooth to HLS Task", processor, configuration, TaskOptions.ProtectedConfiguration);
// Specify the input asset to be encoded.
task.InputAssets.Add(asset);
// Add an output asset to contain the results of the job.
task.OutputAssets.AddNew("Output HLS asset", true, AssetCreationOptions.None);
// Launch the job.
job.Submit();
// Checks job progress and prints to the console.
CheckJobProgress(job.Id);
// Optionally download the output to the local machine.
//DownloadAssetToLocal(job.Id, outputFolder);
job = GetJob(job.Id);
IAsset outputAsset = job.OutputMediaAssets[0];
return outputAsset;
}
In order to convert to iOS-compatible HLS, you have to use a Smooth Streaming source, which is the base for HLS. So your steps would be:
Convert your source to high quality H.264 (MP4)
Convert the result from step (1) into Microsoft Smooth Streaming
Convert the result from step (2) (the Smooth Streaming) into HLS
HLS is very similar to Microsoft Smooth Streaming. Thus it needs chunks of the source with different bitrates. Doing HLS conversion over MP4 will do nothing.
It is sad, IMO, that Microsoft provides such explorative features in the management portal; this leads to confused users. What it does under the scenes is exactly what I suggest to you: first it creates a high-quality MP4, then converts it to Microsoft Smooth Streaming, then does the HLS conversion over the Smooth Streaming. But the user thinks that HLS is performed over the MP4, which is wrong.
If we take a look at the online documentation, we will see that the task preset is named Convert Smooth Streams to Apple HTTP Live Streams. From that we have to figure out that the correct source for HLS is a Microsoft Smooth Stream. And from my experience, a good Smooth Stream can only be produced from a good H.264 source (MP4). If you try to convert a non-H.264 source into a Smooth Stream, the result will most probably be an error.
You can experiment with the little tool WaMediaWeb (source on GitHub with continuous delivery to Azure Web Sites), live here: http://wamediaweb.azurewebsites.net/ - just provide your Media Account and Key. Take a look at the readme on GitHub for specifics, such as which source produces which result.
By the way, you can stack tasks in a single job to avoid constantly polling for job results. The method task.OutputAssets.AddNew(...) actually returns an IAsset, which you can use as an input asset for another task added to the same job. If you look at the example, it does this at some point. It also does a good job of creating HLS streams, tested on iOS with an iPad 2 and an iPhone 4.
