I am working on FFmpeg. I read the http://dranger.com/ffmpeg/ article, from which I understand that FFmpeg doesn't download the file before processing; it plays the file through ffplay or another player. I want to be sure exactly how FFmpeg works:
1) It downloads the file first and then creates the instance/conversion
OR
2) The file plays, and the instance/conversion is created during playback through the FFmpeg player
Which point is correct?
If someone knows, it will be very helpful for others and also for me. :) Thanks in advance.
FFmpeg is a media processing utility. Like most Unix tools, you give it an input to produce an output. It does not grab sources on its own, so no, it will not download anything by itself.
Read the man page for more information about ffmpeg.
Alternatively, run man ffmpeg!
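For example, a typical invocation reads a local input file and writes a converted output file (the file names here are placeholders):
ffmpeg -i input.mp4 output.avi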
Currently, I am implementing a new feature of my software using the Libav API. This is the requirement: merge a list of audio files (MP3 and WAV) and create a single audio file (MP3) as output. Note: The challenge is not about concatenating files, but merging them. When the output is played, all the input audio content must sound at the same time, as when you merge several files in a video editor.
I was researching Libav audio streams, and I am just guessing that my requirement is related to the "channels" concept, I mean, that it may be possible to include several audio tracks in the stream, using one channel per audio or something like that. I was hoping to find more information about this topic, but the FFmpeg/Libav documentation is actually scarce.
Right now, I am able to merge several audio streams with a video stream successfully and I can create a playable MP4 file. My problem is that players like MPlayer/VLC only play the first audio stream with the video; the other two audio streams are ignored.
I was looking at the set of examples included in the FFmpeg source code, but there is nothing specifically related to my requirement, so I would appreciate any source code reference or algorithm explanation about how to merge several audio files into one using libav. Thanks.
Update:
The ffmpeg command to merge several audio files requires the "amix" filter, as in this example:
ffmpeg -i 1.mp3 -i 2.mp3 -i 3.mp3 -filter_complex amix=inputs=3:duration=first result.mp3
All the syntax related to this option is described in the FFmpeg documentation.
Checking the FFmpeg source code, it seems the amix feature implementation is included in the file af_amix.c.
I am not 100% sure, but it seems the general algorithm is described in the function:
static int activate(AVFilterContext *ctx)
Do you know how to merge several audio files using the ffmpeg command line? It would help if you first understand how to do it with the ffmpeg command, then reverse engineer how it achieves it. It's all about how to construct a filtergraph and pass data through it.
As for examples, check out examples/filter_audio.c and examples/filtering_audio.c
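As a rough illustration of what those examples do, here is a minimal sketch of building an amix graph with the libavfilter API (assuming two stereo, 44.1 kHz, float-planar inputs; the filter instance names and the build_mix_graph helper are made up for illustration, and error handling is omitted):

#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>

/* Builds: abuffer (x2) -> amix -> abuffersink, the same graph the
 * -filter_complex amix=inputs=2:duration=first command line creates. */
static AVFilterGraph *build_mix_graph(AVFilterContext **src0,
                                      AVFilterContext **src1,
                                      AVFilterContext **sink)
{
    AVFilterGraph *graph = avfilter_graph_alloc();

    /* One abuffer per input; these args must match the decoder output. */
    const char *args = "sample_rate=44100:sample_fmt=fltp:"
                       "channel_layout=stereo:time_base=1/44100";
    avfilter_graph_create_filter(src0, avfilter_get_by_name("abuffer"),
                                 "in0", args, NULL, graph);
    avfilter_graph_create_filter(src1, avfilter_get_by_name("abuffer"),
                                 "in1", args, NULL, graph);

    AVFilterContext *mix;
    avfilter_graph_create_filter(&mix, avfilter_get_by_name("amix"),
                                 "mix", "inputs=2:duration=first",
                                 NULL, graph);

    avfilter_graph_create_filter(sink, avfilter_get_by_name("abuffersink"),
                                 "out", NULL, NULL, graph);

    /* in0 -> amix pad 0, in1 -> amix pad 1, amix -> sink */
    avfilter_link(*src0, 0, mix, 0);
    avfilter_link(*src1, 0, mix, 1);
    avfilter_link(mix, 0, *sink, 0);

    avfilter_graph_config(graph, NULL);
    return graph;
}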
This C example takes two WAV audio files and merges them to generate a new WAV file using the ffmpeg-4.4 API. Tip: The key to the process is to use these filters: abuffer, amix and abuffersink.
https://github.com/xtingray/audio_mixer/
Although it doesn't support MP3 format as the output, it gives you the basics for understanding how to implement your own requirements. I hope it can be handy for anyone looking for references on this specific topic.
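To complement that example: once a graph like abuffer -> amix -> abuffersink is configured, the data flow boils down to pushing decoded frames into each abuffer input and draining mixed frames from the sink. A rough sketch (push_and_drain and the encode_frame placeholder are invented names, not libav API):

#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
#include <libavutil/frame.h>
#include <libavutil/error.h>

/* Pushes one decoded frame into an abuffer input, then drains whatever
 * mixed frames the abuffersink has ready. */
static int push_and_drain(AVFilterContext *src, AVFilterContext *sink,
                          AVFrame *decoded, AVFrame *mixed)
{
    int ret = av_buffersrc_add_frame(src, decoded);
    if (ret < 0)
        return ret;
    while ((ret = av_buffersink_get_frame(sink, mixed)) >= 0) {
        /* encode_frame(mixed);  <- hand the mixed frame to your encoder */
        av_frame_unref(mixed);
    }
    /* AVERROR(EAGAIN) just means the mixer needs more input first */
    return ret == AVERROR(EAGAIN) ? 0 : ret;
}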
I've been asked to sample some data in a .wac file. I'm not familiar with this format and there is very little on the internet about it. I was given a .wav file, but I don't think it was converted correctly, in that the RIFF header was missing, so no .wav reader was able to read it.
Could anyone therefore shed some light on how I could convert the .wac file into a .wav file? Doing some research, I cannot seem to find a converter tool on the internet, and MATLAB does not have a module for reading .wac data.
NOTE: I've added the "game-engine" tag because, according to this website, the format is used in the Infinity game engine.
I've come up with the following solution; massive thanks to @jpaari for his input.
Basically, I used sox:
sox -r 44100 -e unsigned -b 8 -c 1 input.raw output.wav
I was able to rename the file to .raw and this worked. I'm going to update the sample rate to what @Aybe posted.
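For anyone who would rather stay with ffmpeg, the same raw-PCM import should look roughly like this (assuming the same unsigned 8-bit, mono, 44100 Hz interpretation of the renamed file):
ffmpeg -f u8 -ar 44100 -ac 1 -i input.raw output.wav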
Try this http://www.shsforums.net/topic/39117-ps-gui-v304/
I think Audacity can do it as well. Also, the "unity3d" tag is not quite right.
I want to convert a SWF to MP4/FLV (or similar) from the command line on Linux.
I've tried ffmpeg, mencoder, and a Perl script (FLV::info). But all of these only convert the video inside the SWF (perhaps encoded as H.263 or similar) to a new video; they do not handle movie clips, let alone the ActionScript.
Moyea seems to fit my needs (however, I need a Linux tool), but is there any free way to do this?
Many thanks.
I am using the DirectShow samples (AMCap) to capture live video streams. The video seems to be perfect, but audio is not captured with it.
I am not able to find out the reason. Can anyone please help me solve this problem?
Thank you.
Earlier SDKs, e.g. Microsoft® DirectX® 9.0 SDK Update (October 2004), contained more samples including audio capture, e.g.:
\DirectShow\Samples\C++\DirectShow\Capture\AudioCap
AudioCap
NOTE: In order to write .WAV files to your disk, you must first build and register the WavDest filter in the Samples\Multimedia\DirectShow\Filters\WAVDest directory. Without this filter, you may audition audio input, but you will not be able to write it to your disk.
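For what it's worth, registering a built DirectShow filter is normally done with regsvr32 from an (elevated) command prompt, e.g., assuming the sample builds a binary named wavdest.ax:
regsvr32 wavdest.ax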
How do I decode MP3 audio using FFmpeg (using the API)? If it's not too complicated, some example code?
PS: I got as far as opening the file and finding the audio streams; after that I do not understand what to do ...
It's nice to go all the way through the tutorial, but this part of the tutorial deals with decoding audio (although I am currently having a problem with it, as avcodec_decode_audio3() is the updated version of avcodec_decode_audio2()).
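For reference, the avcodec_decode_audio2()/avcodec_decode_audio3() calls mentioned above have long since been deprecated; in current FFmpeg versions the same task uses the send/receive API. A minimal sketch (error checks omitted for brevity, "input.mp3" is a placeholder):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    /* Open the container and find the best audio stream. */
    AVFormatContext *fmt = NULL;
    avformat_open_input(&fmt, "input.mp3", NULL, NULL);
    avformat_find_stream_info(fmt, NULL);
    int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_AUDIO, -1, -1, NULL, 0);

    /* Set up the decoder from the stream's codec parameters. */
    const AVCodec *dec =
        avcodec_find_decoder(fmt->streams[idx]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[idx]->codecpar);
    avcodec_open2(ctx, dec, NULL);

    /* Read packets and decode them into raw PCM frames. */
    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == idx) {
            avcodec_send_packet(ctx, pkt);
            while (avcodec_receive_frame(ctx, frame) >= 0) {
                /* frame->data[] holds decoded samples;
                 * frame->nb_samples is the per-channel count */
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}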
Hope it helps,
Infinitifizz