I've seen mentions of WebM being used for audio, but reading the WebM Project site and googling "convert mp3 to webm" has led me to believe that WebM is just for video. I definitely see lots of WebM-to-mp3 conversion utilities, but not the reverse.
I can see it's definitely used for video, but how about audio? If it is intended for audio files too, how do I generate a WebM file?
WebM is just a container format and can contain both video and audio:
WebM is a digital multimedia container file format promoted by the
open-source WebM Project. It comprises a subset of the Matroska
multimedia container format.
It can also be used for audio-only purposes, as long as the audio is encoded as Vorbis or Opus. Just specify the correct MIME type (ibid.):
Audio-only files SHOULD have a mime of “audio/webm”
To generate such a file, you need software that supports the WebM container and its codecs. Unfortunately, such support can be hard to come by. Typically the Ogg container is used when you want to encode audio with the Opus codec, as Ogg has much broader support, which may explain the scarcity of WebM audio support (as of this writing).
Update: One route to WebM is to use FFmpeg (see this answer); just disable the video stream with the -vn option.
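For example, a command along these lines should produce an audio-only WebM file (assuming your FFmpeg build includes libopus; the filenames are placeholders):
ffmpeg -i input.mp3 -vn -c:a libopus output.webm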
Related
I've been looking for a solution to stream .avi video files for a while now and I can't find anything.
I found Plex, which provides a web interface for a media library. And notably, Plex can play back .avi video in its web interface!
I saw that it uses blob: URLs, so I guess the file is segmented?
I was wondering if you have any idea how they do this magic?
I am building a content creation site and I'm confused about audio and video.
If I have a content creator's audio or video stored in S3 and I want to display their file, will the HTML video or audio player stream the media, or will it download it fully and then play it?
I ask because the video or audio could be significantly long, like 2 hours for example. I need to know how to handle that use case.
Lastly, what file type is most widely supported for viewing on web pages? It seems like MPEG-4 is the best bet. Is that true?
Most video player clients and browsers will attempt to stream the video if they can.
For an mp4 video file hosted on a server, as long as the header (the moov atom) is at the start of the file and the server accepts range requests, the player will download the video in chunks and start playing as soon as it has enough data to decode the first frames.
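If the header is not already at the front, FFmpeg can relocate it without re-encoding; a minimal sketch (filenames are placeholders):
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4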
For more professional streaming services, they will generally use an adaptive bit rate streaming protocol like DASH or HLS (see this answer: https://stackoverflow.com/a/42365034/334402) and again the video will be streamed in chunks, or segments, and will start playing while it is streaming.
To answer your last question, you need to be aware that the raw video is encoded (e.g. h.264, VP9 etc) and the video, audio, subtitle etc tracks are stored in a container (e.g. mp4, WebM etc).
The most common combination is probably h.264 encoding in an mp4 container at this time.
The particular profile for h.264 can matter also depending on the device - baseline is probably the most supported profile at this time. You can find examples of media support for different devices online, e.g. for Android: https://developer.android.com/guide/topics/media/media-formats
Mick's answer is spot on. I'll just add that mp4 (with h264 encoding) will work in just about every browser out there.
The issue with mp4 files (especially with a 2 hour long movie) isn't so much the seeking & streaming. If your creator uploads a 4K video - that's what you'll deliver to everyone (even mobile phones). HLS streaming, on the other hand, has adaptive bitrates - the video adapts to both the screen & the available network speed. You'll get better playback results with less buffering (and if you're using AWS - a LOT LESS data egress) with adaptive streaming.
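As a rough sketch of what HLS packaging involves, FFmpeg can segment an existing file into a VOD playlist (assuming the source is already h.264/AAC; filenames and segment length are placeholders):
ffmpeg -i input.mp4 -c copy -hls_time 6 -hls_playlist_type vod out.m3u8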
(there are a bunch of APIs and services that can help you do this - including api.video (where I work), Mux and others).
We would like to support serving Ogg Opus on as many phones as reasonably possible.
Based on Wikipedia and our experimentation, we have found:
Android 5.0+ supports Ogg Opus in Matroska (.mkv, .mka) or WebM (.webm) container format
Android 7.0+ supports Ogg Opus in Ogg (.ogg) container format (with .opus as an alias)
Android 10+ supports Ogg Opus in Ogg (.opus) container format
iOS 11+ supports Ogg Opus in Core Audio Format (.caf) container format
Android documentation states that .ogg and .mkv are supported from 5.0+, but experimentation proves otherwise.
We would like to use AWS S3 to store the audio in some base format (like .mkv) which would be served natively to Android. For requests from iOS, we would like to redirect those to an AWS Lambda function that repackages the audio stream from the base format into the .caf container format. It shouldn't need any transcoding, since the codec (Ogg Opus) is the same in both cases.
Any suggestions on how to implement such an AWS Lambda? We would prefer to use Python, but are open to the other supported languages.
Update:
I was looking at Pydub based on finding this article: Simple Audio Processing in Python With Pydub, but directly using ffmpeg might be better for this.
The conversion is simple enough using ffmpeg and opusenc (from opus-tools), so the .webm file will be stored in S3:
ffmpeg -i foo.mp3 foo.wav
opusenc --bitrate 16 --hard-cbr foo.wav foo.opus
ffmpeg -i foo.opus -acodec copy -f webm foo.webm
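As an aside, a recent FFmpeg build with libopus may be able to do all three steps in one go (an untested sketch; -vbr off is meant to mirror opusenc's --hard-cbr):
ffmpeg -i foo.mp3 -vn -c:a libopus -b:a 16k -vbr off foo.webm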
The conversion from .webm to .caf can easily be done with ffmpeg as well:
ffmpeg -i foo.webm -acodec copy -f caf foo.caf
I have found an ffmpeg layer for AWS Lambda.
So the question is: how do I set up a Lambda that either returns the .webm file stored in S3 or does this conversion via the Lambda layer? Should the returned format be based on HTTP headers (i.e. Accept) or the file extension?
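Here is a minimal sketch of the Accept-header approach, assuming an API Gateway proxy event with the object key as a path parameter, a BUCKET environment variable, and that the ffmpeg layer exposes its binary at /opt/bin/ffmpeg (all of these are assumptions to verify against your setup; header casing can also vary by client):

import base64
import os
import subprocess

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ["BUCKET"]    # assumed: bucket name supplied via env var
FFMPEG = "/opt/bin/ffmpeg"       # assumed: path exposed by the ffmpeg layer

def handler(event, context):
    key = event["pathParameters"]["key"]        # e.g. "foo.webm"
    accept = (event.get("headers") or {}).get("Accept", "")

    src = os.path.join("/tmp", os.path.basename(key))
    s3.download_file(BUCKET, key, src)

    if "audio/webm" in accept:
        # Native WebM consumers (Android 5.0+): serve the stored file as-is.
        out, mime = src, "audio/webm"
    else:
        # Everyone else (e.g. iOS): remux the Opus stream into CAF, no transcode.
        out = os.path.splitext(src)[0] + ".caf"
        mime = "audio/x-caf"
        subprocess.run([FFMPEG, "-y", "-i", src, "-acodec", "copy", out], check=True)

    with open(out, "rb") as f:
        body = base64.b64encode(f.read()).decode("ascii")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": mime},
        "isBase64Encoded": True,
        "body": body,
    }

For long files, returning a presigned S3 URL (or a redirect) may be preferable, since Lambda proxy responses are size-limited.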
I ended up using mobile-ffmpeg-full and converting the container from WebM to CAF.
// If a WebM file, use FFmpeg to convert the container to CAF (returns 0 on success)
let error = MobileFFmpeg.execute("-i \"\(fileURL.path)\" -acodec copy \"\(cafFileURL.path)\"")
I'm recording screen and webcam video in a Chrome extension using WebRTC, but it appears the audio streams in my .mp4 videos are encoded with Opus, which causes QuickTime to display "Error -2048: Couldn't open the file video.mp4 because it is not a file that QuickTime understands."
Is it possible to use a different audio encoding option supported by Quicktime?
I don't believe mp4 supports any audio codecs supported by WebRTC.
If possible I would use Matroska, which supports VP8/VP9/H.264 and Opus/PCM and will cover pretty much all WebRTC calls.
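If you do need an mp4 that QuickTime will open, one workaround is to re-encode just the audio track to AAC after recording, e.g. with FFmpeg (filenames are placeholders):
ffmpeg -i video.mp4 -c:v copy -c:a aac video_quicktime.mp4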
I have a common use case scenario where I want to do the following
Upload an audio file. (wav/mp3)
Transcodes to 128k or 192k mp3.
Stores the audio asset.
Allows the audio asset to be streamed.
Supports streaming actions such as play, pause, and seek.
The documentation for Azure Media Services suggests it might support this, but I am not too sure; it seems focused on video content. Does anyone have experience with this?
You can manage audio and encode audio-only assets with Azure Media Services.
WAV is a supported input format/container for an input asset. To see the full list of supported formats, check the following link:
https://azure.microsoft.com/en-us/documentation/articles/media-services-media-encoder-standard-formats/
Check https://github.com/Azure/azure-content/blob/master/articles/media-services/media-services-custom-mes-presets-with-dotnet.md#audio_only to see the audio-only preset options you can use to encode an audio-only asset.