Streaming Ogg Opus as MKV and CAF with AWS Lambda?

We would like to support serving Ogg Opus on as many phones as reasonably possible.
Based on Wikipedia and our experimentation, we have found:
Android 5.0+ supports Ogg Opus in Matroska (.mkv, .mka) or WebM (.webm) container format
Android 7.0+ supports Ogg Opus in Ogg (.ogg) container format (with .opus as an alias)
Android 10+ supports Ogg Opus in Ogg (.opus) container format
iOS 11+ supports Ogg Opus in Core Audio Format (.caf) container format
The Android documentation states that .ogg and .mkv are supported from 5.0+, but experimentation proves otherwise.
We would like to use AWS S3 to store the audio in some base format (like .mkv) which would be served natively to Android. Requests from iOS would be redirected to an AWS Lambda function that takes the base format and repackages the audio stream into the .caf container format. It shouldn't need any transcoding, since the codec (Ogg Opus) is the same in both cases.
Any suggestions on how to implement such an AWS Lambda? We would prefer to use Python, but are open to the other supported languages.
Update:
I was looking at Pydub after finding this article: Simple Audio Processing in Python With Pydub, but directly using ffmpeg might be better for this.
The conversion is simple enough with ffmpeg and opusenc (from opus-tools), so the .webm file will be stored in S3.
ffmpeg -i foo.mp3 foo.wav
opusenc --bitrate 16 --hard-cbr foo.wav foo.opus
ffmpeg -i foo.opus -acodec copy -f webm foo.webm
The conversion from .webm to .caf can easily be done by ffmpeg as well.
ffmpeg -i foo.webm -acodec copy -f caf foo.caf
I have found an ffmpeg layer for AWS Lambda.
So the question is: how to set up a Lambda that either returns the .webm file stored in S3 or does this conversion via the Lambda layer? Should the returned format be based on HTTP headers (i.e., Accept) or on file extension?
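For illustration, a minimal sketch of such a handler in Python, assuming an API Gateway proxy integration, a hypothetical bucket named audio-bucket, and the layer's binary at /opt/bin/ffmpeg (a sketch under those assumptions, not a definitive implementation):

import base64
import subprocess
import boto3

s3 = boto3.client("s3")
BUCKET = "audio-bucket"  # hypothetical bucket name

def handler(event, context):
    # API Gateway proxy event shape assumed; "key" is a hypothetical path parameter
    key = event["pathParameters"]["key"]
    accept = event.get("headers", {}).get("accept", "")
    if "audio/x-caf" not in accept:
        # Android et al.: redirect straight to the .webm stored in S3
        url = s3.generate_presigned_url(
            "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=300
        )
        return {"statusCode": 302, "headers": {"Location": url}}
    # iOS: remux the Opus stream into a CAF container (no transcoding)
    src, dst = "/tmp/in.webm", "/tmp/out.caf"
    s3.download_file(BUCKET, key, src)
    subprocess.run(
        ["/opt/bin/ffmpeg", "-y", "-i", src, "-acodec", "copy", "-f", "caf", dst],
        check=True,
    )
    with open(dst, "rb") as f:
        body = base64.b64encode(f.read()).decode()
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "audio/x-caf"},
        "isBase64Encoded": True,
        "body": body,
    }

Negotiating on the Accept header keeps a single URL per track; keying off the file extension would also work, but the header approach keeps the S3 key layout simple.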

I ended up using mobile-ffmpeg-full and converting the container from WebM to CAF.
// If it is a WebM file, use FFmpeg to convert the container to CAF
let error = MobileFFmpeg.execute("-i \"\(fileURL.path)\" -acodec copy \"\(cafFileURL.path)\"")

Related

WebRTC recording with AAC encoding

I'm recording screen and webcam video in a Chrome extension using WebRTC, but it appears the audio streams in my .mp4 videos are encoded with Opus, which causes QuickTime to display "Error -2048: Couldn't open the file video.mp4 because it is not a file that QuickTime understands."
Is it possible to use a different audio encoding option supported by Quicktime?
I don't believe mp4 supports any audio codecs supported by WebRTC.
If possible I would use Matroska, which supports VP8/VP9/H.264 and Opus/PCM; that will cover pretty much all WebRTC calls.
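For what it's worth, moving an existing recording into Matroska is a remux, not a re-encode; a rough sketch driving ffmpeg from Python, assuming ffmpeg can parse the recorded file and its streams are among the codecs above (file names are placeholders):

import subprocess

# Copy the existing video and audio streams into a Matroska container
# without re-encoding; input/output names are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "recording.mp4", "-c", "copy", "recording.mkv"],
    check=True,
)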

How do I capture an mpeg-dash stream using Python3 opencv?

I have a URL that links to an MPEG-DASH stream (https://something.com/manifest.mpd). I would like to capture this stream to work with the frames in OpenCV on Python 3, which I have installed using pip3. How would I do this?
I have already tried cv2.VideoCapture(URL), but this does not work.
You can try VidGear. It supports the MPEG-DASH format, but it has yet to incorporate Apple's HLS format. If you want a scalable solution, you can use AWS Elemental MediaConvert, which can convert your source files to formats such as m3u8 (HLS) or mpd (DASH). You can use AWS Elemental MediaLive to do the same for live streams.
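As an aside, a minimal frame-reading loop with VidGear's CamGear might look like the sketch below; whether the .mpd manifest actually opens depends on the FFmpeg build backing your OpenCV install, and the URL is a placeholder:

import cv2
from vidgear.gears import CamGear

# Open the stream and pull frames until it ends; the URL is a placeholder.
stream = CamGear(source="https://something.com/manifest.mpd").start()
while True:
    frame = stream.read()
    if frame is None:  # end of stream or read failure
        break
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
stream.stop()
cv2.destroyAllWindows()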

Is WebM for audio too, or just video?

I've seen mentions of WebM being used for audio, but reading the WebM Project site and googling convert mp3 to webm has led me to believe that WebM is just for video. I definitely see lots of WebM to mp3 conversion utilities, but not the reverse.
I can see it's definitely used for video, but how about audio? If it is intended for audio files too, how do I generate a WebM file?
WebM is just a container format and can contain both video and audio:
WebM is a digital multimedia container file format promoted by the open-source WebM Project. It comprises a subset of the Matroska multimedia container format.
It can be used for audio-only purposes as well, as long as the audio is encoded as Vorbis or Opus. Just specify the correct MIME type (ibid.):
Audio-only files SHOULD have a mime of “audio/webm”
To generate such a file, software that supports the WebM container and its codecs has to be used. Unfortunately, such support can be hard to come by. Typically the Ogg container is used when you want to encode audio with the Opus codec, as it has much broader support, which may explain the lack of WebM-for-audio support (as of this writing).
Update: One route to WebM is to use FFmpeg (see this answer); just disable the video with -vn.
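For example, a rough sketch of driving that FFmpeg route from Python, with -vn dropping any video stream and libopus doing the audio encode (file names are placeholders):

import subprocess

# Encode an MP3 to an audio-only WebM using the Opus codec;
# -vn drops any video stream. File names are placeholders.
subprocess.run(
    ["ffmpeg", "-i", "input.mp3", "-vn", "-c:a", "libopus", "output.webm"],
    check=True,
)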

Encoding wav to flac and streaming through node.js

I am trying to create an application which will take raw audio data in WAV format and output FLAC.
Now, I need to stream the input and the output at the same time through Node.
Can someone guide me on how I can work this out?
Thanks
As far as I know, there is no way to do this "live" over the Internet. You can't do it with the FLAC format; only AAC, MP3, and WAV are supported for live streaming.
You can download the file on the client side, and then the client is able to play it with his or her apps.

qt faststart and ffmpeg to generate a live mp4 file [duplicate]

This question already has answers here:
Live video streaming using progressive download (and not RTMP) in Flash
(2 answers)
Closed 9 years ago.
I am using ffmpeg to create an mp4 file on my server. I am also trying to use qt-faststart to move the moov atom to the front so it will stream. I have searched all over the internet with no luck. Is it possible to put my video/audio in an mp4 buffer-type file and then play it while ffmpeg is still dumping video and audio data into the stream? The point is that I am trying to stream from a camera, and Android is horrid... I know both iOS and Android support mp4, so I was trying to figure out a way to turn my RTSP feed into an mp4.
Main point of the story: I want to continuously feed my camera feed into my mp4 container and still be able to play back the file so my clients can watch.
Any help appreciated, thank you.
You can publish a live stream, and when the stream has ended, publish the progressive download.
In FFmpeg, to stream live and save a duplicate of that stream into a file at the same time, without encoding twice, you can use the tee pseudo-muxer. Something like this:
ffmpeg \
-i <input-stream> \
-f tee "[movflags=+faststart]output.mp4|http://<ffserver>/<feed_name>"
Update: You might try to directly stream a fragmented mp4.
Update 2:
Create a fragmented mp4:
ffmpeg -i input -frag_duration 1000 stream.mp4
Normally, when serving a file, a web server wants to know the file size, so to serve the file without knowing its size you need to configure your web server to use chunked transfer encoding.
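As a sketch of that idea, here is a toy HTTP/1.1 server in Python that follows the growing file and frames it in chunks; the file name, port, and poll interval are placeholders, and in production you would configure your real web server (nginx, Apache, etc.) instead:

import http.server
import time

class ChunkedHandler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # chunked transfer requires HTTP/1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "video/mp4")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        # Follow the file while ffmpeg keeps appending fragments to it
        with open("stream.mp4", "rb") as f:
            while True:
                data = f.read(64 * 1024)
                if not data:
                    time.sleep(0.5)  # wait for ffmpeg to write more
                    continue
                # Chunked framing: hex length, CRLF, payload, CRLF
                self.wfile.write(f"{len(data):x}\r\n".encode())
                self.wfile.write(data + b"\r\n")

http.server.HTTPServer(("", 8080), ChunkedHandler).serve_forever()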
