FLV is not directly supported by most mobile browsers,
so I want to convert to the MP4/Ogg format.
Is there any way I can achieve this with FMS, which generates the .flv file from a live webcam stream?
UPDATE
I found a similar question here which partly does the job:
ffmpeg -i input.flv output.mp4
But I need streaming, not a one-off file conversion.
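One thing I am considering (untested, and the RTMP URL and output path here are placeholders for my setup) is pulling the live stream from FMS with ffmpeg and repackaging it as HLS, which iOS and most mobile browsers can play:
ffmpeg -i rtmp://fms-host/live/mystream -c:v copy -c:a aac -f hls /var/www/stream.m3u8
This assumes the FMS stream already carries H.264 video; otherwise -c:v copy would have to become a re-encode such as -c:v libx264.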
I assume you mean Ogg Vorbis audio with AVC/H.264 video in an FLV container? If so, the problem is that the Flash Player does not support Vorbis playback, nor is there a codec ID for it in the FLV specification. There is, however, an Alchemy plugin which decodes Ogg, but it is not for streaming from FMS and certainly not within FLV. Info on the Flash/Ogg decoder:
http://www.hydrogenaudio.org/forums/lofiversion/index.php/t66269.html
Media types for FLV may be found here, as well as other useful information:
http://en.wikipedia.org/wiki/Flash_Video
Summary:
Supported media types in FLV file format
Video: On2 VP6, Sorenson Spark (Sorenson H.263), Screen video, H.264
Audio: MP3, ADPCM, Linear PCM, Nellymoser, Speex, AAC, G.711 (reserved for internal use)
Supported media types in F4V file format
Video: H.264
Images (still frame of video data): GIF, PNG, JPEG
Audio: AAC, HE-AAC, MP3
By the way, I found your question because I am implementing Ogg/Ogv streaming in Red5 (http://code.google.com/p/red5) for HTML5 and Unity.
Related
I am trying to encode raw audio (pcm_f32le) to AAC encoded audio. One thing I've noticed is that I can accomplish this via the CLI tool:
ffmpeg -f f32le -ar 48000 -ac 2 -c:a pcm_f32le -i out.raw out.m4a -y
This plays just fine and decodes fine.
The steps I've taken:
Using the C example code at https://ffmpeg.org/doxygen/3.4/encode_audio_8c-example.html, I switched the encoder to codec = avcodec_find_encoder(AV_CODEC_ID_AAC);
Listing the sample formats the AAC encoder supports, it only provides FLTP, which is a planar (non-interleaved) format.
This page seems to provide the various supported input formats per codec.
This is confusing because I don't think my raw captured audio is planar. I've certainly tried passing it through as-is and it doesn't work as intended.
After calling avcodec_receive_packet it stays stuck indefinitely with this return code:
AVERROR(EAGAIN): output is not available in the current state - user must try to send input
Questions:
How can I modify the example code from FFmpeg to convert pcm_f32le raw audio to AAC encoded audio?
Why is the CLI tool able to do this?
I am using libsoundio to capture raw audio from Linux's Dummy Output. I wonder how I could get a planar format to pass through to get AAC encoded audio.
If AAC is not a possibility, can this be done with MP3?
Find here a working example of how to encode raw pcm_f32le to AAC with FFmpeg.
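The heart of it is converting the packed capture buffer into the planar FLTP layout before each avcodec_send_frame call. A minimal sketch of that step (my own illustrative code, not the linked example; names are placeholders):

#include <libavutil/frame.h>

/* Deinterleave packed pcm_f32le samples (L R L R ...) into the
 * planar FLTP layout (frame->data[0] = all L, frame->data[1] = all R)
 * that the native AAC encoder requires. The AVFrame is assumed to be
 * allocated with format AV_SAMPLE_FMT_FLTP, the right channel layout
 * and nb_samples, e.g. via av_frame_get_buffer(). */
static void packed_to_planar(AVFrame *frame, const float *packed,
                             int nb_samples, int channels)
{
    for (int ch = 0; ch < channels; ch++) {
        float *plane = (float *)frame->data[ch];
        for (int i = 0; i < nb_samples; i++)
            plane[i] = packed[i * channels + ch];
    }
}

Note that AVERROR(EAGAIN) from avcodec_receive_packet is not an error: it means the encoder wants more input, so the loop should go back to avcodec_send_frame with the next converted frame rather than spinning on receive. libswresample (swr_convert) can do the same packed-to-planar conversion more generally.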
I have gotten a set of FLAC (audio) files from a friend. I copied them to my Sonos music library, and got set to enjoy a nice album. Unfortunately, Sonos would not play the files. As a result I have been getting to know ffmpeg.
Sonos' complaint with the FLAC files was that they were "encoded at an unsupported sample rate". With rolling eyes and shaking head, I note that the free VLC media player happily plays these files, but the product I've paid for (Sonos) does not. But I digress...
ffprobe revealed that the FLAC files contain both an audio stream and a video stream:
$ ffprobe -hide_banner -show_streams "/path/to/Myaudio.flac"
  Duration: 00:02:23.17, start: 0.000000, bitrate: 6176 kb/s
  Stream #0:0: Audio: flac, 176400 Hz, stereo, s32 (24 bit)
  Stream #0:1: Video: mjpeg (Progressive), yuvj444p(pc, bt470bg/unknown/unknown), 450x446 [SAR 72:72 DAR 225:223], 90k tbr, 90k tbn, 90k tbc (attached pic)
    Metadata:
      comment         : Cover (front)
Cool! I guess this is how some audio players are able to display the 'album artwork' when they play a song? Note also that the audio stream is reported at 176400 Hz! Apparently I'm out of touch; I thought a 44.1 kHz sampling rate effectively removed all of the 'sampling artifacts' we could hear. Anyway, I learned that Sonos supports a maximum sample rate of 48 kHz, and the 176.4 kHz rate is what Sonos was unhappy about. I used ffmpeg to 'dumb it down' for them:
$ ffmpeg -i "/path/to/Myaudio.flac" -sample_fmt s32 -ar 48000 "/path/to/Myaudio48K.flac"
This seemed to work - at least I got a FLAC file that Sonos would play. However, I also got what looks like a warning of some sort:
[swscaler @ 0x108e0d000] deprecated pixel format used, make sure you did set range correctly
[flac @ 0x7feefd812a00] Frame rate very high for a muxer not efficiently supporting it.
Please consider specifying a lower framerate, a different muxer or -vsync 2
A bit more research turned up this answer, which I don't quite understand, but which says in a comment "not to worry" - at least with respect to the swscaler part of the warning.
And that (finally) brings me to my questions:
1.a. What framerate, muxer & other specifications make a graphic compatible with a majority of programs that use the graphic?
1.b. How should I use ffmpeg to modify the Video channel to set these specifications (ref. Q 1.a.)?
2.a. How do I remove the video stream from the .flac audio file?
2.b. How do I add a video stream into a .flac file?
EDIT:
I asked the above (4) questions after failing to accomplish a 'direct' conversion (a single ffmpeg command) from FLAC at 176.4 kHz to ALAC (.m4a) at 48 kHz (max supported by Sonos). I reasoned that an 'incremental' approach through a series of conversions might get me there. With the advantage of hindsight, I now see I should have posted my original failed direct conversion incantation... we live and learn.
That said, the accepted answer below meets my final objective to convert a FLAC file encoded at 176.4kHz to an ALAC (.m4a) at 48kHz, and preserve the cover art/video channel.
What framerate, muxer & other specifications make a graphic compatible with a majority of programs that use the graphic?
Cover art is just a single frame, so framerate has no relevance in this case. However, you don't want a real video stream; it has to remain a single image, so -vsync 0 should be added. "Muxer" is simply the term used in media file processing for the packager; it is decided by the choice of format, e.g. FLAC, WAV, etc. What's important is the codec for the cover art; usually it's PNG or JPEG. For FLAC, PNG is the default codec.
How do I remove the video stream from the .flac audio file?
ffmpeg -i "/path/to/Myaudio.flac" -vn -c copy "/path/to/Myaudio48K.flac"
(All this does is skip any video in the input and copy everything else)
How do I add a video stream into a .flac file?
To add cover art to audio-only formats like MP3, FLAC, etc., the video stream has to have a disposition of attached picture. So,
ffmpeg -i "/path/to/Myaudio.flac" -i coverimage -sample_fmt s32 -ar 48000 -disposition:v attached_pic -vsync 0 "/path/to/Myaudio48K.flac"
For direct conversion to ALAC, use
ffmpeg -i "/path/to/Myaudio.flac" -i coverimage -ar 48000 -c:a alac -disposition:v attached_pic -vsync 0 -c:v png "/path/to/Myaudio48K.m4a"
I am trying to write encoded audio packets into an MP4 container. I have followed this sample code and, instead of creating a dummy frame, I am feeding real G.711 PCMU encoded frames into FFmpeg. The writing seems to work and the file size increases, but the resulting MP4 does not play in ffplay or VLC.
Thanks in advance!
G.711 PCM encoded data is not supported by the MP4 container, so I used the MOV container instead. For MP4, I transcoded the PCM to AAC, which MP4 does support. See this for details.
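The same two routes can be reproduced from the command line (a sketch; input.ul stands in for a raw G.711 µ-law capture at 8 kHz mono):
ffmpeg -f mulaw -ar 8000 -ac 1 -i input.ul -c:a copy output.mov
ffmpeg -f mulaw -ar 8000 -ac 1 -i input.ul -c:a aac output.mp4
The first keeps the G.711 data as-is in a MOV container; the second transcodes it to AAC so the MP4 muxer accepts it.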
Is there any difference between M4A audio files and AAC audio files, or are they exactly the same thing but with a different file extension?
.M4A files typically contain only audio and are formatted as MPEG-4 Part 14 files (the .MP4 container).
.AAC is not a container format; it is a raw MPEG-4 Part 3 bitstream carrying the encoded audio.
Note that M4A does not have to contain exactly AAC audio, there are other valid options as well.
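Because only the container differs, a raw .aac bitstream can be rewrapped into an .m4a without re-encoding (file names here are placeholders):
ffmpeg -i input.aac -c:a copy output.m4a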
There are raw video and audio streams; these cannot be played directly by most video/audio players and need to be "encapsulated" in a container. A raw H.264 video stream and a raw AAC audio stream need to go inside an MP4 container, though they could also go inside an AVI or MOV container.
An MP4 file can contain an H.264 video stream and/or an AAC audio stream, but for some reason someone decided that an MP4 file containing video and audio should use the file extension M4V (V for video), and an MP4 file containing only audio should use the M4A extension. That is common practice with other containers too, like Windows Media, which uses WMV and WMA, or Ogg, which uses OGV and OGA, silly as it seems.
So a file with an M4A extension is an MP4 file that usually contains an AAC audio track, but that is not always the case, which is why programs like MediaInfo come in handy to find out what is inside a file.
They are not the same thing.
An .m4a file is basically the same thing as an .mp4; it is only a container format (codec != container). It does not imply a codec, so it can contain MP3, AC-3, or any other audio codec.
An .aac file contains concatenated AAC frames, each prepended with an ADTS header (and optionally an ID3 tag).
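Going the other way, an AAC track can be pulled out of an .m4a into a raw ADTS .aac stream without re-encoding (file names here are placeholders):
ffmpeg -i input.m4a -c:a copy -f adts output.aac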
I have tried to capture audio from a live stream using an audio capture device with the MONOGRAM AAC Encoder downloaded from http://blog.monogram.sk/janos/2007/12/11/free-aac-encoder-filter/, but it generates an audio file of 1 KB with no audio.
Can anyone tell me the reason for this?
Thank you.