Fragmented mp4 for DASH and HLS On Demand vs Live Profiles

I'm experimenting with Bento4 and Shaka Packager to output files for both DASH and HLS using fragmented mp4.
I'm having some trouble understanding the differences and the pros and cons between the MPEG-DASH Live and On-Demand profiles. If I were streaming live broadcast content I would use the Live profile, but for static on-demand videos it seems I can use either the On-Demand or the Live profile. Each profile outputs files in a completely different file format and folder structure: On-Demand produces a flat folder containing .mp4 files, while Live produces a nested folder structure containing .m4s files.
Is it advisable to use one profile rather than the other for static video content that will not be broadcast live (e.g. for browser support, efficiency, etc.), and if so, why?

The "live" profile is somewhat of a misnomer, because it isn't really related to live streaming. The main difference is that with the on-demand profile, the server hosts large flat files with many segments per file (where a segment is a short portion of a media asset, like audio or video, typically 2 to 10 seconds each), including an index of where the segments are in the file. It is then up to the streaming client to access the segments one by one by doing HTTP "range" requests to access portions of the media assets. For the "live" profile, segments are not accessed as ranges in a flat resource, but as a separate resource for each segment (a separate URL for each segment). This doesn't necessarily mean that the HTTP server needs to have the segments in separate files, but it need to be able to map each segment URL to its corresponding media, either by performing itself a lookup in an index into a flat file, or by having each segment in a separate file, or by any other means. So it is up to the server to do the heavy lifting (as opposed to the "on-demand" profile where it is the player/client that does that.
With packagers like Bento4, if no special assumptions are made about the HTTP server that will serve the media, the default mode for the "live" profile is to store each segment in a separate file, so that the stream can be served by any off-the-shelf HTTP server.
So, for simplicity, if your player supports the on-demand profile, that's an easier one to choose, since you'll have fewer files.
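To make the access pattern difference concrete, here is a rough client-side sketch (the URLs, byte offsets, and file names are invented for illustration): with the On-Demand profile the player issues HTTP range requests into one flat file per representation, using offsets taken from the segment index, whereas with the "live" profile it requests a separate URL per segment:

// Hypothetical URLs and byte offsets, just to illustrate the two access patterns.

// On-Demand profile: one flat .mp4 per representation; the client reads the
// segment index (sidx) and then issues HTTP range requests for each segment.
async function fetchOnDemandSegment() {
  const response = await fetch('https://example.com/video/bugsbunny_720p.mp4', {
    headers: { Range: 'bytes=1500000-2499999' } // offsets come from the sidx index
  });
  return new Uint8Array(await response.arrayBuffer()); // one media segment
}

// Live profile: each segment has its own URL (built from the SegmentTemplate
// in the MPD), so the client simply GETs the next resource.
async function fetchLiveProfileSegment(segmentNumber) {
  const response = await fetch(
    `https://example.com/video/720p/segment-${segmentNumber}.m4s`
  );
  return new Uint8Array(await response.arrayBuffer());
}

In practice a DASH player such as Shaka or dash.js does this for you based on the MPD; the sketch is only meant to show what ends up on the wire in each case.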

Related

Is there a package to prepare MPD for video streaming from multiple identical HTTP sources for MPEG-DASH?

I would like to program something that can play video using MPEG-DASH in a web video player such as shaka-player. I've been looking at using node.js and a setup similar to D-DASH.
So, say I have bugsbunny.mp4 mirrored on 10 different servers. I would like to have a script that acts in a pseudo-torrenty way and creates an MPD from those .mp4's based on whichever server is the fastest to respond for each chunk/segment of the video.
(This is kind of like what a CDN does, but rather than selecting CDN servers based on the client's location, I would like to optimize the streaming for each chunk in the MPD by checking and selecting the best URL.)
Other requirements:
- cannot pre-process or encode the .mp4 to create pre-carved segments (I need to use these mp4's at run-time as is)
- would like to create the manifest MPD on the client machine, not on the server
I'm not sure what design approach to take for this. I've heard of P2P projects like Peer5, but I don't know if that's the way I want to go. I want to be able to consider multiple HTTP sources for video files and select the URL source for segments on the fly. If anyone could recommend general approaches to achieve this, that would be great.
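One rough way to sketch the "fastest mirror per chunk" idea (the mirror URLs and byte ranges below are made up, and the mirrors would need CORS rules that allow the Range header) is to time a tiny probe request against each mirror and fetch the chunk from whichever answered fastest:

// Rough sketch: time a small range request against each mirror and fetch the
// chunk from whichever responded fastest. Mirror URLs and ranges are examples.
const mirrors = [
  'https://cdn1.example.com/bugsbunny.mp4',
  'https://cdn2.example.com/bugsbunny.mp4',
  'https://cdn3.example.com/bugsbunny.mp4'
];

async function timeMirror(url) {
  const start = performance.now();
  await fetch(url, { headers: { Range: 'bytes=0-1023' } }); // 1 KB probe
  return { url, ms: performance.now() - start };
}

async function fetchChunkFromFastest(rangeHeader) {
  const results = await Promise.all(mirrors.map(timeMirror));
  results.sort((a, b) => a.ms - b.ms);           // fastest mirror first
  const response = await fetch(results[0].url, {
    headers: { Range: rangeHeader }              // e.g. 'bytes=0-999999'
  });
  return response.arrayBuffer();
}

A real implementation would re-probe periodically and feed the fetched chunks into MediaSource buffers, but the per-chunk selection logic stays the same.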

Using nodeJS for streaming videos

I am planning to write a Node.js server for streaming videos. One of my critical requirements is
to prevent video downloads (as much as possible), something similar to safaribooksonline.com.
I am planning to use Amazon S3 for storage and Node.js for streaming the videos to the client.
I want to know if Node.js is the right tool for streaming videos (max size 100 MB) for an application expecting a lot of users. If not, then what are the alternatives?
Let me know if any additional details are required.
In very simple terms, you can't prevent video download. If a rogue client wants to do it, they generally can - the video has to make it to the client for the client to be able to play it back.
What is most commonly done is to encrypt the video so the downloaded version is unplayable without the right decryption key. A DRM system will allow the client to play the video without being able to copy it (depending on how determined the user is - a high-quality camera pointed at a high-quality screen is hard to protect against; in these scenarios, other tracing technologies come into play).
As others have mentioned in the comments, streaming servers are not simple - they have to handle a wide range of encoders, packaging formats, streaming formats, etc. to allow as much reach as possible, and they have quite complicated mechanisms to ensure speed and reduce file storage requirements.
It might be an idea to look at some open source streaming servers to get a feel for the area, for example:
VideoLan (http://www.videolan.org/vlc/streaming.html)
GStreamer (https://gstreamer.freedesktop.org)
You can still use Node.js for the main web server component of your solution and just hand off the video streaming to the specialised streaming engines, if this meets your needs.
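To give an idea of what the Node.js side can look like, here is a minimal sketch (the file path and port are placeholders, and an S3-backed version would pipe from the S3 SDK instead of the local filesystem) of an endpoint that streams a video with HTTP range support, which is what a browser's <video> element expects:

// Minimal sketch of a Node.js endpoint that streams a local video file with
// HTTP Range support. Path and port are placeholders for illustration.
const http = require('http');
const fs = require('fs');

const VIDEO_PATH = './videos/sample.mp4';

http.createServer((req, res) => {
  const { size } = fs.statSync(VIDEO_PATH);
  const range = req.headers.range;

  if (!range) {
    // No Range header: send the whole file.
    res.writeHead(200, { 'Content-Type': 'video/mp4', 'Content-Length': size });
    fs.createReadStream(VIDEO_PATH).pipe(res);
    return;
  }

  // e.g. "bytes=32768-" -> start=32768, end=size-1
  const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;

  res.writeHead(206, {
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4'
  });
  fs.createReadStream(VIDEO_PATH, { start, end }).pipe(res);
}).listen(8080);

Note that this only serves the bytes; it does nothing to prevent download, which is where the DRM/encryption discussion above comes in.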

Maximizing Encoding Speed with Windows Azure Media Service Encoding

I have an android application (client), asp.net web api web server (server), and Windows Azure Media Services (WAMS) account.
What I Want: To upload a 3-30 second video from the client to the server and have it encoded with WAMS, then available for streaming via HLSv3 as quickly as possible. Ideally a video preview image would be generated as well. As fast as possible means something like a sub-one-minute turnaround. That's likely not realistic, I realize, but the faster the better.
Where I'm At: We upload the video to the server as a stream, which then stores it in Azure blob storage. The server returns to the client indicating upload success. The server has an action that kicks off the encoding, which then gets called. I run a custom encoding task based off of the H264 Adaptive Bitrate MP4 Set 720p preset, modified to take a 640x480 video and crop it to 480x480 at the same time as encoding. Then I run a thumbnail job that generates one thumbnail at 480x480. Depending on the reserved encoder quality this can take anywhere from ~5 minutes down to ~2 minutes. The encoding job time is only 30-60 seconds of that; the rest is a mix of queue time, publishing time, and communication delay.
What can I do to improve the client-upload-to-video-streamable turnaround time? Where are the bottlenecks in the encoding process? Is there a reasonable max speed that can be achieved? Are there config settings that can be tweaked to improve the process performance?
Reduce the number of jobs
The first thing that springs to mind is given you're only interested in a single thumbnail, you should be able to consolidate your encode and thumbnail jobs by adding something like this to the MediaFile element of your encode preset:
<MediaFile ThumbnailTime="00:00:00"
ThumbnailMode="BestFrame"
ThumbnailJpegCompression="75"
ThumbnailCodec="Jpeg"
ThumbnailSize="480, 480"
ThumbnailEmbed="False">
The thumbnail will end up in the output asset along with your video streams.
Reduce the number of presets in the task
Another thing to consider is that the preset that you linked to has multiple presets defined within it in order to produce audio streams at different bitrates. My current understanding is that each of these presets is processed sequentially by the encode unit.
The first preset defines the video streams, and also specifies that each video stream should have the audio muxed in at 96kbps. This means that your video files will be larger than they probably need to be, and some time will be taken up in the muxing process.
The second and third presets just define the audio streams to output - these wouldn't contain any video. The first of these outputs the audio at 96kbps, the second at 56kbps.
Assuming you're happy with a fixed audio quality of 96kbps, I would suggest removing the audio from the video streams and the last of the audio streams (56kbps) - that would save the same audio stream being encoded twice, and audio being muxed in with the video. (Given what I can tell from your usage, you probably don't need that anyway)
The side benefit of this would be that your encoder output file size will go down marginally, and hence the cost of encodes will too.
Workflow optimisation
The only other point I would make is regarding the workflow by which you get your video files into Azure in the first place. You say that you're uploading them into blob storage - I assume that you're subsequently copying them into an AMS asset so they can be configured as inputs for the job. If that's right, you may save a bit of time by uploading directly into an asset.
Hope that helps, and good luck!

Images in Web application

I am working on an application in which users will upload a huge number of images, and I have to show those images on a webpage.
What is the best way to store and retrieve the images?
1) Database
2) FileSystem
3) CDN
4) JCR
or something else
What I know is:
Database: saving and retrieving images from the database will lead to a lot of queries to the database and will convert a blob to a file every time. I think it will degrade the website's performance.
FileSystem: if I keep image information in the database and the image files in the filesystem, there will be sync issues. For example, if I take a backup of the database, we also have to take a backup of the images folder. And if there are millions of files it will consume a lot of server resources.
I read about it here:
http://akashkava.com/blog/127/huge-file-storage-in-database-instead-of-file-system/
Another options are CDNs and JCR
Please suggest the best option.
Using the File System is only really an option if you only plan to deploy to one server (i.e. not several behind a load balancer), OR if all of your servers will have access to a shared File System. It may also be inefficient, unless you cache frequently-accessed files in the application server.
You're right that storing the actual binary data in a Database is perhaps overkill, and not what databases do best.
I'd suggest a combination:
A CDN (such as AWS CloudFront), backed by a publicly-accessible (but crucially publicly read-only) storage such as Amazon S3 would mean that your images are efficiently served, wherever the browsing user is located and cached appropriately in their browser (thus minimising bandwidth). S3 (or similar) means you have an API to upload and manage them from your application servers, without worrying about how all servers (and the outside world) will access them.
I'd suggest holding metadata about each image in a Database, however. This means that you could assign each image a unique key (generated by your database), add extra info (file format, size, tags, author, etc.), and also store the path to S3 (or similar) via the CDN as the publicly-accessible path to the image.
This combination of Database and shared publicly-accessible storage is probably a good mix, giving you the best of both worlds. The Database also means that if you need to move / change or bulk delete images in future (perhaps deleting all images uploaded by an author who is deleting their account), you can perform an efficient Database query to gather metadata, followed by updating / changing the stored images at the S3 locations the Database says they exist.
You say you want to display the images on a web page. This combination means that the application server can query the database efficiently for the image selection you want to show (including restricting by author, pagination, etc.), then generate a view containing images referring to the correct CDN path. It means viewing the images is also quite efficient, as you combine dynamic content (the page upon which the images are shown) with static content (the images themselves via the CDN).
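As a rough illustration of that combination (the CloudFront domain, bucket layout, and the in-memory Map standing in for a real metadata table are all invented for the example), the application server would create a metadata record per image and derive the publicly-accessible CDN path from the S3 key:

// Sketch of the Database + S3/CloudFront combination described above.
// An in-memory Map stands in for the real metadata table; the CDN domain
// and key layout are example values.
const crypto = require('crypto');

const CDN_BASE = 'https://d111111abcdef8.cloudfront.net'; // example CloudFront domain
const imageTable = new Map(); // stand-in for a real database table

function registerImage({ author, fileName, format, sizeBytes, tags }) {
  const id = crypto.randomUUID();                   // unique key for the image
  const s3Key = `images/${author}/${id}.${format}`; // where the object lives in S3

  const record = {
    id, author, fileName, format, sizeBytes, tags,
    s3Key,
    cdnUrl: `${CDN_BASE}/${s3Key}`, // publicly-accessible path served via the CDN
    uploadedAt: new Date().toISOString()
  };

  imageTable.set(id, record); // in practice: an INSERT into your metadata table
  return record;
}

// e.g. after uploading the bytes to S3 under s3Key:
const rec = registerImage({
  author: 'alice', fileName: 'cat.jpg', format: 'jpg',
  sizeBytes: 204800, tags: ['pets']
});
console.log(rec.cdnUrl);

The page that lists images then only queries this metadata and emits <img> tags pointing at the stored cdnUrl, so the image bytes never pass through your application servers.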
CDNs may be a good option for you.
You can store the link to the images along with the image information in your database.

Approach for File System Based Data Storage for Web Application

I am looking for optimal approach to use file system based data storage in Web Application.
Basically, I am developing a Google Chrome extension, which is in fact content-script based. Its functionality is as follows:
The extension is content-script based and will be fired for each webpage the user visits.
The extension will fetch some data continuously (every 5/10 seconds) from a database at a cross-browser location (in JSON format) and display that data in the form of a ticker on each webpage. The content script will modify the DOM of web pages to display the ticker.
For the above scheme, I have noticed that the continuous fetching of data greatly increases the server's and the client's bandwidth consumption. Hence, I am planning an approach that maintains the data in file-based storage, bundled with the extension and accessed locally, to avoid the bandwidth consumption.
The files I can maintain can be text, CSV, or even XML. But the issue is that I need to read the data files through JavaScript, jQuery, or AJAX, and none of these have particularly efficient file-handling and file-access mechanisms.
Can anyone please suggest an approach, with file-access mechanisms, for an optimal solution to the above problem?
Also, if you can suggest a whole new approach other than file-system-based data storage, that would be really helpful to me.
If all you need is to read some data from files, then you can bundle those files with the extension.
In which format to store data in those files - I can think of 3 options:
1) You can read XML files through XMLHttpRequest and parse them with jQuery. It is very easy and powerful with all jQuery selectors at your disposal.
2) You can store the data in JSON format, read it the same way, and parse it with JSON.parse().
3) You can directly make JavaScript objects out of your data and simply include the file in your background page through a <script src="local.js"> tag. So your local.js would look something like this:
var data = [{obj1: "value1"}, ...];
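As an example, a minimal sketch of option 2 (the file name data.json is just an example, and for a content script to read it the file has to be listed under web_accessible_resources in the manifest):

// Sketch of option 2: a data.json file bundled with the extension and read
// locally, so no repeated network fetches are needed.
// chrome.runtime.getURL() resolves paths inside the extension package;
// data.json must appear in web_accessible_resources for content scripts.
function loadBundledData(callback) {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', chrome.runtime.getURL('data.json'), true);
  xhr.onload = function () {
    callback(JSON.parse(xhr.responseText)); // parse the bundled JSON
  };
  xhr.send();
}

loadBundledData(function (data) {
  // e.g. hand the data to the code that renders the ticker
  console.log('loaded', data.length, 'ticker entries');
});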
I have used XML for years, based on the advice from Microsoft stating that small-volume sites can do this.
But XML almost always loads the whole document, hence its size will influence performance.
I did some tests with 40,000+ nodes in different browsers three years ago and, strangely enough, MS seems to be the one that can handle this :)
And AJAX was originally created to stream XML.

Resources