I'm developing an Android application, and I'm using FFmpeg to convert files.
I want my binary to be as slim as possible, since I only handle a few input and output formats and my operation is quite basic. And of course I don't want to bloat the APK.
In my program, FFmpeg receives a file and copies the audio stream (-acodec copy); the audio stream will always be AAC (mp4a). What I need is to save that stream to a file.
My command looks like this: ffmpeg -i {input} -vn -acodec copy output.aac
What muxer do I need to enable for muxing AAC to a file? I have tried flv, mp3, and mov, but I always get
Unable to find a suitable output format for 'output.aac', so these options are wrong.
I don't need an encoder for stream copy, by the way.
Side note: this command works flawlessly on a full installation of FFmpeg, but I don't know which muxer it uses. If there is a way to print the muxer that a regular FFmpeg run selects, that would work too.
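(A quick way to check this on a full build: the console banner names the muxer that was auto-selected, and -muxers lists everything a given binary was built with. A small sketch, nothing Android-specific; on a typical full build the selected muxer should be adts.)
ffmpeg -i {input} -vn -acodec copy output.aac
# the banner should contain a line like: Output #0, adts, to 'output.aac':
ffmpeg -muxers
# lists every muxer compiled into this particular binary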
A common file format for AAC is BMFF/MOV/MP4/M4A. If you specify the m4a file extension, FFmpeg will take care of it for you.
ffmpeg -i {input} -vn -acodec copy output.m4a
If you just want raw AAC, you can use ADTS as a lightweight container of sorts, as Mulvya suggested.
ffmpeg -i {input} -vn -acodec copy -f adts output.aac
I had to add the -f option for it to work (on FFmpeg 3.22):
ffmpeg -i {input} -vn -acodec copy -f adts output.m4a
You have to enable the adts muxer when configuring FFmpeg, e.g. ./configure --disable-everything (...) --enable-muxer=adts. Then you will be able to save to an .aac file.
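For the stream-copy use case in the question, a minimal configure sketch might look like the following; the demuxer, parser, and protocol names here are assumptions based on an MP4/MOV input and may need adjusting for your actual sources (stream copy needs no encoder or decoder):
./configure --disable-everything \
    --enable-protocol=file \
    --enable-demuxer=mov \
    --enable-muxer=adts \
    --enable-parser=aac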
I have a folder with 15 images and 1 audio file:
image_1.jpg, image_2.jpg, image_3.jpg ..... and music.webm
(Also, the resolution of the images is 1440x720.)
I want to combine these images into a video with the audio in the background. The framerate I require is 0.2 (5 seconds for each frame). I searched Stack Overflow, found the closest example, and tried it, but it failed:
ffmpeg -f image2 -i image%03d.jpg -i music.webm output.mp4
(I have very little knowledge of FFmpeg, so please excuse my foolishness.)
Please help me with my issue. (Also, I didn't understand where in the command I have to specify the framerate.)
Edit: If needed, I can easily tweak the filenames of the images. Feel free to tell me that too.
How did your command fail? Please paste the output text.
Given your image file names, the pattern should be image_%d.jpg (plain %d, with the underscore), not image%03d.jpg.
e.g.
image_%d.jpg applies to: image_1.jpg, image_2.jpg, image_3.jpg
image_%04d.jpg applies to: image_0001.jpg, image_0002.jpg ... image_9999.jpg
Also, when using this pattern sequence, make sure:
the sequence starts at the lowest expected number (e.g. xxx001.jpg); otherwise you need to specify the -start_number option.
the sequence is not broken, e.g. image5.jpg, image6.jpg, (missing 7) image8.jpg.
Refer to:
https://ffmpeg.org/ffmpeg-formats.html#image2-2
https://en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide/image_sequence
Try this:
ffmpeg -r 0.2 -i image_%02d.jpg -i music.webm -vcodec libx264 -crf 25 -preset veryslow -acodec copy video.mkv
So -r specifies the fps (I haven't actually tried using a float value there, but give it a try).
-vcodec specifies the video codec, -crf the quality, -preset the encoding speed (slower is more efficient), and -acodec copy says the audio should be copied.
I think that should work; give it a try. You will need to rename the images to image_01.jpg, image_02...
Also have a look here: How to create a video from images with FFmpeg?
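For the exact setup in the question (image_1.jpg ... image_15.jpg with no zero padding, 5 seconds per image, music.webm as audio), a sketch could look like this. The -pix_fmt yuv420p flag and the AAC re-encode of the WebM audio are assumptions for MP4/player compatibility, the output -r 25 just gives players a normal frame rate while each image still lasts 5 seconds, and -shortest stops at the end of the shorter input:
ffmpeg -framerate 0.2 -i image_%d.jpg -i music.webm -c:v libx264 -pix_fmt yuv420p -r 25 -c:a aac -shortest output.mp4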
I am trying to write code to download YouTube videos using pytube on Python 3.6. But for most videos, the progressive format (audio and video in the same file) is available only up to 360p. So I want to download the audio and video files separately and combine them. I am able to download the audio and video files. How can I combine the two files together?
Basically, I can't find any method to merge audio and video in pytube, but you can use FFmpeg for muxing.
First of all, you have to install FFmpeg:
ffmpeg installation guide for Windows
For Ubuntu, it's just sudo apt install ffmpeg.
Then add the dependency ffmpeg-python, a Python wrapper for FFmpeg:
pip install ffmpeg-python
Now we are ready to go with this code snippet:
import ffmpeg

# the separately downloaded video and audio files
video_stream = ffmpeg.input('Of Monsters and Men - Wild Roses.mp4')
audio_stream = ffmpeg.input('Of Monsters and Men - Wild Roses_audio.mp4')

# mux both streams into a single output file
ffmpeg.output(audio_stream, video_stream, 'out.mp4').run()
For more, see the ffmpeg-python API reference.
If you keep getting a video without audio, that's because of the adaptive streaming from pytube. A workaround is to download both video and audio... then merge them with ffmpeg.
For instance, something like this to get both audio and video (audio part adapted from here)
from pytube import YouTube
import os
youtube = YouTube('https://youtu.be/ksu-zTG9HHg')
video = youtube.streams.filter(res="1080p").first().download()
os.rename(video,"video_1080.mp4")
audio = youtube.streams.filter(only_audio=True)
audio[0].download()
And then the ffmpeg part (adapted from both here and here): you can set it up on Windows following this procedure and then run something like:
ffmpeg -i video.mp4 -i audio.mp4 -c:v copy -c:a aac output.mp4
Merging audio and video using ffmpeg
Once you have downloaded both video and audio files (‘videoplayback.mp4’ and ‘videoplayback.m4a’ respectively), here’s how you can merge them into a single file:
In case of MP4 format (all, except 1440p 60fps & 2160p 60fps):
ffmpeg -i videoplayback.mp4 -i videoplayback.m4a -c:v copy -c:a copy output.mp4
In case of WebM format (1440p 60fps and 2160p 60fps):
ffmpeg -i videoplayback.webm -i videoplayback.m4a -c:v copy -c:a copy output.mkv
Wait until ffmpeg finishes merging the audio and video into a single output file (output.mp4 or output.mkv).
How do I convert the downloaded audio file to mp3?
You need to execute the following command in the Command Prompt window:
ffmpeg -i INPUT_FILE -ab BITRATE -vn OUTPUT_FILE
Example:
ffmpeg -i videoplayback.m4a -ab 128000 -vn music.mp3
Example 2 (without bitrate):
ffmpeg -i videoplayback.m4a -vn music.mp3
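Note that -ab is an older alias; on newer FFmpeg builds the same thing is usually written as -b:a:
ffmpeg -i videoplayback.m4a -vn -b:a 128k music.mp3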
I have a dummy client that is supposed to simulate a video recorder; on this client I want to simulate a video stream. I have gotten far enough that I can create a video from bitmap images that I create in code.
The dummy client is a Node.js application running on a Raspberry Pi 3 with the latest version of Raspbian Lite.
In order to use the video I have created, I need to get FFmpeg to dump the video to pipe:1. The problem is that I need -f rawvideo as an input parameter, otherwise FFmpeg can't understand my video; but when I have that parameter set, FFmpeg refuses to write anything to stdout.
FFmpeg is running with these parameters:
ffmpeg -r 15 -f rawvideo -s 3840x2160 -pixel_format rgba -i pipe:0 -r 15 -vcodec h264 pipe:1
Can anybody help with a solution to my problem?
Edit:
Maybe I should explain a bit more.
The system I am creating is set up so that, instead of my stream server asking the video recorder for a video stream, it is the recorder that tells the server there is a stream.
I have solved my problem on my own. (-:
I now have two solutions.
The first is to change -f rawvideo to -f data, which works for me anyway.
The second is to encode my bitmaps as JPEG in code and pipe the JPEG images to stdin. This also requires me to change the FFmpeg parameters to -r 4 -f mjpeg -i pipe:0 -r 4 -vcodec copy -f mjpeg pipe:1 (see the full command below), and it is by far the slowest thing I have ever done; I also can't use a 4K input.
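For reference, the second variant as the full FFmpeg invocation (JPEG frames arriving on stdin at 4 fps, MJPEG passed through to stdout):
ffmpeg -r 4 -f mjpeg -i pipe:0 -r 4 -vcodec copy -f mjpeg pipe:1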
Thanks @Mulvya for trying to help.
@eFox, thanks for editing my spelling and grammar mistakes.
I am trying to concatenate video files so that the next one follows the one before it when played. The formatting of all the files is the same. The files all have audio and video.
I think I am very close (hopefully!) to getting this to work, but I have one final problem. The command below takes all of the mp4 files in my folder and creates a big mp4 file, which is the right size in total MB, but the images for all videos after the first video are garbled. The audio is okay (continues just fine from video to video). Also, I don't get any error messages.
ffmpeg -f concat -i <(for f in /folder1/*.mp4; do echo "file '$f'"; done) -c copy /folder1/all.mp4
I'm not very familiar with ffmpeg yet, so I've just been trying the different suggestions I've found on the web. Can anyone suggest other things for me to try? (I've tried reading the FAQs, but I have to confess that I don't fully understand them. Also, there seem to be some posts about audio being missing after concatenation, but I haven't seen anything on images being garbled.) Thanks in advance!
I have had good luck using this (avconv is a fork of ffmpeg):
avconv -i 1.mp4 1.mpeg
avconv -i 2.mp4 2.mpeg
avconv -i 3.mp4 3.mpeg
cat 1.mpeg 2.mpeg 3.mpeg | avconv -f mpeg -i - -vcodec mpeg4 -strict experimental output.mp4
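Since avconv is a fork of ffmpeg, the same approach should also work with ffmpeg itself; this is just the commands above with the binary name swapped (on newer builds -strict experimental may no longer be needed):
ffmpeg -i 1.mp4 1.mpeg
ffmpeg -i 2.mp4 2.mpeg
ffmpeg -i 3.mp4 3.mpeg
cat 1.mpeg 2.mpeg 3.mpeg | ffmpeg -f mpeg -i - -vcodec mpeg4 -strict experimental output.mp4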
I'm working on a music VOD app for iPhone, and thanks to Apple's guidelines, I have to use HTTP Live Streaming in order to be accepted on the App Store. But since Apple doesn't care about 98% of the servers on earth, they don't provide their so-magical HTTP Live Streaming Tools for Linux-based systems. And from this point, the nightmare starts.
My goal is simple: take an MP3, segment it, and generate a simple .m3u8 index file.
I googled "HTTP Live Streaming Linux" and thought, "Oh great! Lots of people have already done that!"
First, I visited the (so famous) post by Carson McDonald.
Result: the SVN segmentate.c was old, buggy, and a nightmare to compile (nobody in this world can say precisely which version of ffmpeg they are using!).
Then I came across Carson's git repo, but too bad: there is a lot of annoying Ruby stuff, and live_segmenter.c can't take MP3 files.
Then I searched more deeply. I found this Stack Overflow topic, and it's exactly what I want to do. So I followed the advice from juuni to use this script (httpsegmenter). Result: impossible to compile anything; after two days of work I finally managed to compile it (ffmpeg 0.8.1 with httpsegmenter rev17). And no, this is not a good script: it does take MP3 files, but the generated .ts files and the index file can't be read by a player.
Then the author of the post, krisbulman, came up with a solution and even provided his own patched version of m3u8-segmenter (git repo). I tested it: it doesn't compile and does nothing. So I took the original version from johnf, https://github.com/johnf/m3u8-segmenter. I managed to compile it and, miracle, it works (not really).
I used this command line (ffmpeg 0.8.1):
ffmpeg -er 4 -i music.mp3 -f mpegts -acodec libmp3lame -ar 44100 -ab 128k -vn - | m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://test.com/
This command encodes my MP3 file (it takes 4 seconds, too long) and passes it to m3u8-segmenter, which splits it into 10-second .ts files.
I tested this stream with Apple's mediastreamvalidator on my Mac, and it said it was OK. So I played it in QuickTime, but there is about a 0.2-second gap between each .ts file!
So here is my situation: it's a nightmare, and I can't get a simple MP3 stream over the HLS protocol. Is there a simple WORKING solution to segment an MP3? Why can't I directly segment the MP3 file into multiple MP3 files, like Apple's mediafilesegmenter does?
Use libfaac instead of libmp3lame, which eliminates the 0.2-second break.
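Applied to the command line from the question, and assuming your FFmpeg build was configured with libfaac enabled, that would be something like:
ffmpeg -er 4 -i music.mp3 -f mpegts -acodec libfaac -ar 44100 -ab 128k -vn - | m3u8-segmenter -i - -d 10 -p outputdir/prefix -m outputdir/output.m3u8 -u http://test.com/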
Elastic Transcoder Service - if you don't need AES encryption just throw your MP3 in an S3 bucket and be done with it:
http://aws.amazon.com/elastictranscoder/
You can then even add CloudFront CDN support. (P.S. I fully appreciate your pain; this whole space is a nightmare.)
For live streaming only, you could try Nginx with the RTMP module: https://github.com/arut/nginx-rtmp-module
Live HLS works pretty well, but with a looooong buffer.
However, it does not support on-demand HLS streaming.
A piece of the module's config, for example:
# HLS requires libavformat & should be configured as a separate
# NGINX module in addition to nginx-rtmp-module:
# ./configure ... --add-module=/path/to/nginx-rtmp-module/hls ...
# For HLS to work please create a directory in tmpfs (/tmp/app here)
# for the fragments. The directory contents is served via HTTP (see
# http{} section in config)
#
# Incoming stream must be in H264/AAC/MP3. For iPhones use baseline H264
# profile (see ffmpeg example).
# This example creates RTMP stream from movie ready for HLS:
#
# ffmpeg -loglevel verbose -re -i movie.avi -vcodec libx264
# -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1
# -f flv rtmp://localhost:1935/hls/movie
#
# If you need to transcode live stream use 'exec' feature.
#
application hls {
    live on;
    hls on;
    hls_path /tmp/app;
    hls_fragment 5s;
}
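For convenience, here is the ffmpeg publishing command from the comments above, joined into one runnable line (movie.avi and the localhost URL are the example's placeholders):
ffmpeg -loglevel verbose -re -i movie.avi -vcodec libx264 -vprofile baseline -acodec libmp3lame -ar 44100 -ac 1 -f flv rtmp://localhost:1935/hls/movie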
What problems were you having with httpsegmenter? It's a single C source file that only links against some libraries provided by ffmpeg (or libav). I maintain a Gentoo ebuild for it, as I use it to time-shift talk radio. If you're running Gentoo, building is as simple as this:
sudo bash -l
layman -S
layman -a salfter
echo media-video/httpsegmenter ~\* >>/etc/portage/package.accept_keywords
emerge httpsegmenter
exit
On Ubuntu, I had to make sure libavutil-dev and libavformat-dev were both installed, so the build looks something like this:
sudo apt-get install libavutil-dev libavformat-dev
git clone https://gitlab.com/salfter/httpsegmenter.git
cd httpsegmenter
make -f Makefile.txt
sudo make -f Makefile.txt install
Once it's built (and once I have an audio source URL), usage is fairly simple: curl to stream the audio, ffmpeg to transcode it from whatever it is at the source (often MP3) to AAC, and segmenter to chunk it up:
curl -m 3600 http://invalid.tld/stream | \
ffmpeg -i - -acodec libvo_aacenc -ac 1 -ab 32k -f mpegts - 2>/dev/null | \
segmenter -i - -d 20 -o ExampleStream -x ExampleStream.m3u8 2>/dev/null
This grabs one hour of streaming audio (needs to be MP3 or AAC, not Flash), transcodes it to 32 kbps mono AAC, and chunks it up for HTTP live streaming. Have it dump into a directory served up by your webserver and you're good to go.
Once the show's done, converting to a single .m4a that can be served up as a podcast is also simple:
cat `ls -rt ExampleStream-*.ts` | \
ffmpeg -i - -acodec copy -absf aac_adtstoasc ExampleStream.m4a 2>/dev/null
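On newer FFmpeg builds the -absf option has been replaced; the equivalent spelling there should be -bsf:a:
cat `ls -rt ExampleStream-*.ts` | \
ffmpeg -i - -acodec copy -bsf:a aac_adtstoasc ExampleStream.m4a 2>/dev/null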
I know this is an old question, but I am using this in VLC:
## To start playing the playlist out to the encoder
cvlc -vvv playlist.m3u --sout rtp:127.0.0.1 --ttl 2
## To start the encoder
cvlc rtp:// --sout='#transcode{acodec=mp3,ab=96}:duplicate{dst=std{access=livehttp{seglen=10,splitanywhere=true,delsegs=true,numsegs=15,index=/var/www/vlctest/mystream.m3u8,index-url=http://IPANDPORT/vlctest/mystream-########.ts},mux=ts,dst=/var/www/vlctest/mystream-########.ts},select=audio}'
I had problems if I didn't stream the playlist file to another copy of VLC; the first step is optional if you already have a live streaming source (but you can use any source for the "encoder" portion).
You could try our media services on the Windows Azure platform: http://mingfeiy.com/how-to-generate-http-live-streaming-hls-content-using-windows-azure-media-services/
You can encode and stream your video in HLS format by using our portal, with no configuration or coding required.
Your English is fine.
Your frustration is apparent.
Q: What's the real issue here? It sounds like you just need a working HLS server, correct? Because of Apple requirements, correct?
Can you use any of the ready-made implementations listed here:
http://en.wikipedia.org/wiki/HTTP_Live_Streaming