I'm currently on the fence about which container I should use for the videos I put on my website.
I recently started uploading videos of gameplay/walkthroughs and saw the need for a container that could hold HD video without limitations on file size, codecs (AAC or AVC), or resolution (in the future I want to be able to support 5K video), plus 5.1 Dolby Digital and better audio. Of course I don't expect 5K to stream efficiently, I just want it to be available.
This is where the confusion started.
I currently use the .flv container because people say it is better all around: less resource-intensive, widely used, and it supports the common codecs. The problem with it is simple: it cannot support the HD content I want to show, 5.1 Dolby audio, or unlimited file sizes.
MP4 is everything I need, but I have heard that it can be slow to respond, that pseudo-streaming modules are not widely supported, and I don't have time to change containers every time someone wants to update to .mp5, 6, 12, etc.
That's why I'm also considering .mkv as the container. .MKV supports everything I want (HD, 3D), all codecs, it's universal, and it has no limits on file attributes. THE ONLY problem is that it cannot be streamed.
I know this is a programmers' site, but maybe in the future, given that web connections will only get faster, I or someone else could write an Apache module for .mkv streaming. I don't know where to find the source of an Apache module, so I cannot do it at this time.
I'm leaning between .flv and .mkv. I'm not really concerned about .mp4, because if I want to be future-proof I need .mkv, and if I'm not concerned about the future or updates I should stay with .flv.
What do you all think? Would it really be so difficult to program an .mkv streaming module?
Excluding web streaming, which of the three would be better all around: video quality (AAC, AVC), file size limits, universality, web support, etc.?
Thanks,
You could use the Windows Media streaming platform, which will take care of most of these problems for you. However, MP4 with H.264 video and AAC audio, streamed/played with Flash, is also a good option.
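If it helps with the "web support" part of the question, a quick, hedged way to see what a visitor's browser can natively play is to ask the video element itself. This is only a sketch; the codec strings are common examples, not an exhaustive list.

```typescript
// Quick capability check in the browser: ask a <video> element which
// container/codec combinations it thinks it can play ("" / "maybe" / "probably").
const probe = document.createElement("video");

console.log("MP4 / H.264 + AAC:", probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"'));
console.log("WebM / VP9       :", probe.canPlayType('video/webm; codecs="vp9"'));
console.log("FLV              :", probe.canPlayType("video/x-flv"));        // effectively never supported natively
console.log("MKV              :", probe.canPlayType("video/x-matroska"));   // rarely supported natively
```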
I have a service which uses the Azure Media Services v3 SDK to upload (and transform) video files. At the moment I am working on a solution for video virus scanning.
I have a question:
Since the service re-encodes video files to host on its streaming service, does that negate the requirement to scan these files?
Thanks
Re-encoding video pulls apart the source by decoding the original audio and video 'in the clear' and then encoding them again. This limits the attack vectors, since the MP4 header is rewritten, the video and audio are no longer the original data, and only a limited amount of metadata gets copied from the old header.
For a virus attack it is less common to hide something in an actual video file; more often the attacker just disguises something executable as a video, with no real video in the file. For example, a .exe file made to look like a video would not survive the re-encoding process, since it is not an actual video file. This does not mitigate all risk, but it does mitigate a lot of it.
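As an illustration of the "not actually a video" case, here is a rough sketch of rejecting an upload that cannot be parsed as video before handing it to the encoder. It shells out to ffprobe, which stands in for whatever validation step your pipeline actually uses; the file path is a placeholder.

```typescript
// Hedged sketch: refuse to re-encode an upload that ffprobe cannot parse as
// containing at least one video stream. Assumes ffprobe is installed locally.
import { execFile } from "child_process";

function looksLikeRealVideo(path: string): Promise<boolean> {
  return new Promise((resolve) => {
    execFile(
      "ffprobe",
      ["-v", "error", "-show_entries", "stream=codec_type", "-of", "json", path],
      (err, stdout) => {
        if (err) return resolve(false); // ffprobe could not parse the file at all
        const streams = JSON.parse(stdout).streams ?? [];
        resolve(streams.some((s: { codec_type: string }) => s.codec_type === "video"));
      }
    );
  });
}
```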
I have a content creation site I am building and I'm confused about audio and video.
If I have a content creator's audio or video stored in S3 and I want to display their file, will the HTML video or audio player stream the media, or will it download the whole thing and then play it?
I ask because the video or audio could be quite long, like 2 hours for example. I need to know how to handle that use case.
Lastly, what file type is most widely accepted for viewing on webpages? It seems like MPEG-4 is the best bet. Is that true?
Most video player clients and browsers will attempt to stream the video if they can.
For an mp4 video file hosted on a server, as long as the header is at the start and the server accepts range requests, the player will download the video in chunks and start playing as soon as it has enough to decode the first frames.
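Moving the header to the front is normally done at encode time (ffmpeg's `-movflags +faststart` does this). On the server side, "accepts range requests" boils down to honoring the Range header; below is a rough Node sketch of that. The file path and port are placeholders, and a real server would also need validation, caching headers and error handling.

```typescript
// Minimal sketch of serving a single mp4 with HTTP Range support in Node.
import * as fs from "fs";
import * as http from "http";

const FILE = "video.mp4"; // placeholder path

http.createServer((req, res) => {
  const size = fs.statSync(FILE).size;
  const range = req.headers.range; // e.g. "bytes=0-" sent by <video> players

  if (range) {
    const [startStr, endStr] = range.replace(/bytes=/, "").split("-");
    const start = parseInt(startStr, 10);
    const end = endStr ? parseInt(endStr, 10) : size - 1;

    res.writeHead(206, {
      "Content-Range": `bytes ${start}-${end}/${size}`,
      "Accept-Ranges": "bytes",
      "Content-Length": end - start + 1,
      "Content-Type": "video/mp4",
    });
    fs.createReadStream(FILE, { start, end }).pipe(res);
  } else {
    res.writeHead(200, { "Content-Length": size, "Content-Type": "video/mp4" });
    fs.createReadStream(FILE).pipe(res);
  }
}).listen(8080);
```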
For more professional streaming services, they will generally use an adaptive bit rate streaming protocol like DASH or HLS (see this answer: https://stackoverflow.com/a/42365034/334402) and again the video will be streamed in chunks, or segments, and will start playing while it is streaming.
To answer your last question, you need to be aware that the raw video is encoded (e.g. h.264, VP9, etc.) and that the video, audio, subtitle, etc. tracks are stored in a container (e.g. mp4, WebM, etc.).
The most common combination at this time is probably h.264 encoding in an mp4 container.
The particular profile for h.264 can matter also depending on the device - baseline is probably the most supported profile at this time. You can find examples of media support for different devices online, e.g. for Android: https://developer.android.com/guide/topics/media/media-formats
@Mick's answer is spot on. I'll just add that mp4 (with h264 encoding) will work in just about every browser out there.
The issue with mp4 files (especially with a 2-hour-long movie) isn't so much the seeking and streaming. If your creator uploads a 4K video, that's what you'll deliver to everyone (even mobile phones). HLS streaming, on the other hand, has adaptive bitrates: the video adapts to both the screen and the available network speed. You'll get better playback results with less buffering (and, if you're using AWS, a LOT LESS data egress) with video streaming.
(there are a bunch of APIs and services that can help you do this - including api.video (where I work), Mux and others).
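For what it's worth, playing an HLS stream in the browser doesn't take much client code either. Here's an illustrative sketch using the hls.js library; the manifest URL is a placeholder, and Safari/iOS can play HLS natively without the library.

```typescript
// Illustrative only: playing an HLS (adaptive bitrate) stream in the browser.
import Hls from "hls.js";

const video = document.querySelector("video") as HTMLVideoElement;
const src = "https://cdn.example.com/stream/master.m3u8"; // placeholder manifest URL

if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(video); // hls.js switches renditions as bandwidth changes
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  video.src = src; // Safari/iOS play HLS natively
}
```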
Right now my goal is to grab a streaming video from an IP surveillance camera and display it on a web page.
The camera allows to encode the streaming either in h264 or mjpeg, and transmits it by the RTSP protocol.
The streaming has to be available for several kinds of devices (mainly computers, android smartphones and iphones).
According to my findings it seems like the best option for doing that (in terms of latency) is to transmit the frames of the video through a websocket:
http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets.
Almost all the implementations of this mechanism I've found are based on mjpeg since it's easier to get the video frames.
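To make that concrete, this is roughly what I have in mind on the browser side for the MJPEG approach; the server (not shown) would push one JPEG frame per binary WebSocket message. The WebSocket URL and canvas id are just placeholders for the example.

```typescript
// Sketch of the MJPEG-over-WebSocket approach: paint each received JPEG frame
// onto a canvas as it arrives.
const canvas = document.getElementById("player") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

const ws = new WebSocket("ws://camera-gateway.local:8084/stream"); // placeholder URL
ws.binaryType = "arraybuffer";

ws.onmessage = async (event) => {
  const blob = new Blob([event.data], { type: "image/jpeg" });
  const frame = await createImageBitmap(blob); // decode the JPEG
  ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
  frame.close();
};
```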
There's also an h264 player, https://github.com/131/h264-live-player, based on https://github.com/mbebenita/Broadway, which I didn't manage to run (I would appreciate any help in that respect).
Now the first question is: is it worth trying to work with h264 (since it saves a lot of bandwidth), or would the h264 decoding process introduce too much latency?
I would also like to ask if anyone knows a better solution than the one I'm trying to implement.
Finally, when I say "additional information" I mean that I might want to include some additional data associated with certain video frames (something like subtitles or telemetry data).
I am trying to build a website and mobile app (iOS, Android) for an internet radio station.
Website users broadcast their music or radio, and mobile users just listen to radio stations and chat with other listeners.
I spent a week researching and made a prototype with the Wowza engine (using HLS and RTMP) and a SHOUTcast server on Amazon EC2.
Using HLS there is a delay of about 5 seconds, while RTMP and SHOUTcast have a 2-second delay.
With this result I think I should choose RTMP or SHOUTcast.
But I am not sure RTMP and SHOUTcast are the best protocols. :(
What protocol should I choose?
Do I need to provide various protocols to cover all platforms?
This is a very broad question. Let's start with the distribution protocol.
Streaming Protocol
HLS has the advantage of allowing users to get the stream in the bitrate that is best for their connection. Clients can scale up/down seamlessly without stopping playback. This is particularly important for video, but for audio even mobile clients are capable of playing 128kbit streams in most areas. If you intend to have a variety of bitrates available and want to change quality mid-stream, then HLS is a good protocol for you.
The downside of HLS is compatibility. iOS supports it, but that's about it. Android has HLS support but it is still buggy. (Maybe in another year or two once all the Android 3.0 folks are gone, this won't be as much of an issue.) JWPlayer has some hacks to make HLS work in Flash for desktop users.
I wouldn't bother with RTMP unless you're only concerned with Flash users.
Pure progressive streaming with HTTP is the route I almost always choose to go. Everything can play it. (Even my Palm Pilot's default media player from 12 years ago.) It's simple to implement and well understood.
SHOUTcast is effectively HTTP, but a poorly implemented version that has compatibility issues, particularly on mobile devices. It has a non-standard status line in its response which breaks a lot of clients. Icecast is a good alternative, and is what I would recommend for production use today. As another option, I have created my own streaming service called AudioPump which is HTTP as well, and has been specifically built to fix compatibility with oddball mobile clients, such as native Android players on old hardware. It isn't generally available yet, but you can contact me at brad#audiopump.co if you want to try it.
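For illustration, the client side of plain progressive HTTP streaming really is this small (the mount URL and button id below are placeholders, not a real station):

```typescript
// Minimal sketch: playing a progressive HTTP (Icecast-style) audio stream in the browser.
const player = new Audio("https://radio.example.com/stream.mp3"); // placeholder mount URL
player.preload = "none"; // don't start buffering until the user asks to listen

document.querySelector("#play")?.addEventListener("click", () => {
  void player.play(); // play() returns a Promise in modern browsers
});
```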
Latency
You mentioned a latency of 2 seconds being desirable. If you're getting 2-second latency with SHOUTcast, something is wrong. You don't want latency that low, particularly if you're streaming to mobile clients. I usually start with a 20-second buffer at a minimum, which is flushed to the client as fast as it can receive it. This enables immediate starting of the stream playback (as it fills up the client-side buffer so it can begin decoding) while providing some protection against buffer underruns due to network conditions. It's not uncommon for mobile users to walk around the corner of a building and lose their nice signal quality. You want your stream to survive that as best as possible, so if you have already sent the data to cover the drop out, the user doesn't have to know or care that their connection became mediocre for a short period of time.
If you do require low latency, you're looking at the wrong technology entirely. For low latency, check out WebRTC.
You certainly can tweak your traditional internet radio setup to reduce latency, but rarely is that a good idea.
Codec
Codec choice is what will dictate your compatibility more than anything else. MP3 is easily the most compatible, and AAC isn't far behind. If you go with AAC, you get better quality audio for a given bitrate. Most folks use this to reduce their bandwidth bill.
There are licensing fees with MP3, and there may be with AAC depending on what you're using for a codec. Check with a lawyer. I am not one, and the licensing is extremely complicated.
Other codecs include Vorbis and Opus. If you can use Opus, do so as the licensing is wide open and you get good quality for the bandwidth. Client compatibility here though is the killer of Opus. (Maybe in a few years it will be better.) Vorbis is a mediocre codec, but is free and clear.
On the extreme end, I have some stations doing their streaming in FLAC. This is lossless audio quality, but you're paying for about 8x the bandwidth you would with a medium-quality MP3 station. FLAC-over-HTTP streaming compatibility is not good at the moment, but it works alright in VLC.
It is very common to support multiple codecs for your streams. Depending on your budget, if you can't do that, you're best off with MP3.
Finally on encoding, don't go from a lossy codec to another lossy codec if you can help it. Try to get the output stream as close to the input as possible. If you re-encode audio, you lose quality every time.
Recording from Browser
You mentioned users streaming from a browser. I built something like this a couple years ago with the Web Audio API where the audio is captured and then encoded and sent off to Icecast/SHOUTcast servers. Check it out here: http://demo.audiopump.co:3000/ A brief explanation of how it works is here: https://stackoverflow.com/a/20850467/362536
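If you just want to see the capture side in a few lines, here is a simplified sketch (not the exact Web Audio pipeline from the demo above) that grabs the microphone with getUserMedia and produces Opus/WebM chunks a server could transcode and forward to Icecast. The ingest URL is made up.

```typescript
// Simplified browser-side capture sketch using MediaRecorder.
async function startBroadcast(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream, { mimeType: "audio/webm;codecs=opus" });

  recorder.ondataavailable = (e) => {
    // Each chunk is a small WebM blob; POST (or WebSocket-send) it upstream.
    void fetch("https://ingest.example.com/live", { method: "POST", body: e.data }); // placeholder endpoint
  };

  recorder.start(1000); // emit a chunk roughly every second
}
```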
Anyway, I hope this helps you get started.
Streaming straight audio/mpeg (mp3 packets) has worked everywhere I've tried.
If you are developing an app, go with AAC; if you are simply playing via a web browser, then you need an HTML5 implementation, which means MP3. Custom protocols like RTMP or SHOUTcast require additional UI to be built. There are some third-party players available in the open-source world; you can either use them or stick to HTML5 MP3/OGG, as most people nowadays are using Chrome or other HTML5-compliant browsers.
I'm developing a video chat-like application using Flash RTMFP and Stratus. So far, I'm having good success. I can build from source, tweak settings, and get video and audio in both directions.
There's one glaring problem I haven't been able to solve, however -- when using a client on a Linux machine, the video received by the other end looks very poor. It's blocky and pixellated, almost as if it's rendering 160x120 in a much larger frame. When sending from a Mac (my other dev machine), the video looks quite good.
I've tried modifying all the settings I can think of -- frame rate, "quality", size, audio settings -- with no discernible improvement. I've tried running it as a local file and from a remote server. The network where I'm working is extremely fast, so that shouldn't be an issue.
Is there anything else I can try? Any suggestions or ideas are greatly appreciated.
Many thanks!
Bad camera or bad camera driver?
Stratus does not change the video encoding; it is simply another variation of the RTMFP protocol for transferring exactly the same compressed stream.
One way you can check whether Stratus really plays any role in this is to stream the same content through Adobe Flash Media Server; the development version is free from adobe.com.
I have built Stratus applications and have not experienced any degradation of video quality compared to a Flash Media Server solution. In fact, when the camera quality is set to 100, you won't notice the difference between the raw camera video and the compressed stream when using loopback mode, apart from a possibly limited framerate if you also specify bandwidth (the three are intimately related: bandwidth, framerate, and quality, as per the documentation of Camera.setQuality or Camera.setMode).