I would like to ask about an ffmpeg config or command to push fragmented MP4 to an Azure Media Services live event using the Smooth Streaming / isml protocol. AMS is not receiving any input from ffmpeg yet.
This is the command I am currently running:
ffmpeg -f dshow -i video="Webcam" -movflags isml+frag_keyframe -f isml -r 10 http://endpoint/ingest.isml/streaming1
When I use RTMP with Wirecast, it runs well.
Any suggestions for an ffmpeg command using the isml protocol?
Thank you
It is possibly the way you are formatting the ingest URL. The Smooth ingest protocol expects /Streams(yourtrackname-identifier) appended after it.
See the Smooth ingest specification for details.
Here is an FFmpeg command line that I had sitting around, which worked for me on a Raspberry Pi at one time:
ffmpeg -i /dev/video1 -pix_fmt yuv420p -f ismv -movflags isml+frag_keyframe -video_track_timescale 10000000 -frag_duration 2000000 -framerate 30 -r 30 -c:v h264_omx -preset ultrafast -map 0:v:0 -b:v:0 2000k -minrate:v:0 2000k -maxrate:v:0 2000k -bufsize 2500k -s:v:0 640x360 -map 0:v:0 -b:v:1 500k -minrate:v:1 500k -maxrate:v:1 500k -s:v:1 480x360 -g 60 -keyint_min 60 -sc_threshold 0 -c:a libfaac -ab 48k -map 0:a? -threads 0 "http://johndeu-nimbuspm.channel.mediaservices.windows.net/ingest.isml/Streams(video)"
Note that I used the following stream identifier: ingest.isml/Streams(video)
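Applied to the command in the question, a minimal sketch might look like the following (untested; the endpoint host and stream name are placeholders, and the encoder settings are assumptions you should adjust):
ffmpeg -f dshow -i video="Webcam" -c:v libx264 -preset veryfast -pix_fmt yuv420p -r 30 -g 60 -keyint_min 60 -b:v 1000k -movflags isml+frag_keyframe -video_track_timescale 10000000 -frag_duration 2000000 -f ismv "http://endpoint/ingest.isml/Streams(stream0)"
Note that the muxer name is ismv; isml is only a movflag that marks the output as a live Smooth Streaming feed.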
Here are a couple more commands that may help.
ffmpeg -v debug -y -re -i "file.wmv" -movflags isml+frag_keyframe -video_track_timescale 10000000 -frag_duration 2000000 -f ismv -threads 0 -c:a libvo_aacenc -ac 2 -b:a 20k -c:v libx264 -preset fast -profile:v baseline -g 48 -keyint_min 48 -b:v 200k -s:v 320x240 http://xxxx.userid.channel.mediaservices.windows.net/ingest.isml/Streams(video)
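Note that the libvo_aacenc encoder has since been removed from FFmpeg. On a current build, a sketch of the same command with the native AAC encoder swapped in (everything else unchanged, URL quoted so the parentheses don't trip up the shell) would be:
ffmpeg -v debug -y -re -i "file.wmv" -movflags isml+frag_keyframe -video_track_timescale 10000000 -frag_duration 2000000 -f ismv -threads 0 -c:a aac -ac 2 -b:a 20k -c:v libx264 -preset fast -profile:v baseline -g 48 -keyint_min 48 -b:v 200k -s:v 320x240 "http://xxxx.userid.channel.mediaservices.windows.net/ingest.isml/Streams(video)"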
Multi-bitrate encoding and ingest
ffmpeg -re -stream_loop -1 -i C:\Video\tears_of_steel_1080p.mov -movflags isml+frag_keyframe -f ismv -threads 0 -c:a aac -ac 2 -b:a 64k -c:v libx264 -preset fast -profile:v main -g 48 -keyint_min 48 -sc_threshold 0 -map 0:v -b:v:0 5000k -minrate:v:0 5000k -maxrate:v:0 5000k -s:v:0 1920x1080 -map 0:v -b:v:1 3000k -minrate:v:1 3000k -maxrate:v:1 3000k -s:v:1 1280x720 -map 0:v -b:v:2 1800k -minrate:v:2 1800k -maxrate:v:2 1800k -s:v:2 854x480 -map 0:v -b:v:3 1000k -minrate:v:3 1000k -maxrate:v:3 1000k -s:v:3 640x480 -map 0:v -b:v:4 600k -minrate:v:4 600k -maxrate:v:4 600k -s:v:4 480x360 -map 0:a:0 http://.myradarmedia.channel.mediaservices.windows.net/ingest.isml/Streams(stream0^)
EXPLANATION OF WHAT IS GOING ON IN THE FFMPEG COMMAND LINE ABOVE:
ffmpeg
-re **READ INPUT AT NATIVE FRAMERATE
-stream_loop -1 **LOOP THE INPUT INFINITELY
-i C:\Video\tears_of_steel_1080p.mov **INPUT FILE IS THIS MOV FILE
-movflags isml+frag_keyframe **OUTPUT IS SMOOTH STREAMING THIS SETS THE FLAGS
-f ismv **OUTPUT ISMV SMOOTH
-threads 0 ** SETS THE THREAD COUNT TO USE FOR ALL STREAMS. YOU CAN USE A STREAM SPECIFIC COUNT AS WELL
-c:a aac ** SET TO AAC CODEC
-ac 2 ** SET THE OUTPUT TO STEREO
-b:a 64k ** SET THE BITRATE FOR THE AUDIO
-c:v libx264 ** SET THE VIDEO CODEC
-preset fast ** USE THE FAST PRESET FOR X264
-profile:v main **USE THE MAIN PROFILE
-g 48 ** GOP SIZE IS 48 frames
-keyint_min 48 ** KEY INTERVAL IS SET TO 48 FRAMES
-sc_threshold 0 ** DISABLE SCENE-CHANGE KEYFRAME INSERTION SO KEYFRAMES LAND ONLY ON THE FIXED GOP BOUNDARY
-map 0:v ** MAP THE FIRST VIDEO TRACK OF THE FIRST INPUT FILE
-b:v:0 5000k **SET THE OUTPUT TRACK 0 BITRATE
-minrate:v:0 5000k ** SET OUTPUT TRACK 0 MIN RATE TO SIMULATE CBR
-maxrate:v:0 5000k ** SET OUTPUT TRACK 0 MAX RATE TO SIMULATE CBR
-s:v:0 1920x1080 **SCALE THE OUTPUT OF TRACK 0 to 1920x1080.
-map 0:v ** MAP THE FIRST VIDEO TRACK OF THE FIRST INPUT FILE
-b:v:1 3000k ** SET THE OUTPUT TRACK 1 BITRATE TO 3Mbps
-minrate:v:1 3000k -maxrate:v:1 3000k ** SET THE MIN AND MAX RATE TO SIMULATE CBR OUTPUT
-s:v:1 1280x720 ** SCALE THE OUTPUT OF TRACK 1 to 1280x720
-map 0:v -b:v:2 1800k ** REPEAT THE ABOVE STEPS FOR THE REST OF THE OUTPUT TRACKS
-minrate:v:2 1800k -maxrate:v:2 1800k -s:v:2 854x480
-map 0:v -b:v:3 1000k -minrate:v:3 1000k -maxrate:v:3 1000k -s:v:3 640x480
-map 0:v -b:v:4 600k -minrate:v:4 600k -maxrate:v:4 600k -s:v:4 480x360
-map 0:a:0 ** FINALLY TAKE THE SOURCE AUDIO FROM THE FIRST SOURCE AUDIO TRACK.
http://.myradarmedia.channel.mediaservices.windows.net/ingest.isml/Streams(stream0^)
The URL above is the output destination of the command; it ended up on its own line due to a formatting issue.
Related
I'm using the following command to combine two video files, overlaying the second one at a certain point in the first file. The result is what I want, except that the audio from the overlaid file is missing.
ffmpeg.exe -y -hide_banner -ss 00:00:00.067 -i promo.mov -i tag.mov -filter_complex "[1:v]setpts=PTS+6.5/TB[a];[0:v][a]overlay=enable=gte(t\,6.5)[out]" -map [out] -map 0:a -map 1:a -c:v mpeg2video -c:a pcm_s16le -ar 48000 -af loudnorm=I=-20:print_format=summary -preset ultrafast -q:v 0 -t 10 complete.mxf
Without the -map 0:a I get no audio at all, but the second -map 1:a does not pass the audio from -i tag.mov
I have also tried amix but that combines audio from both clips starting at the beginning, and I want the audio from the second file to begin when that file starts overlaying.
It would also be helpful if I could make the audio from the first clip drop lower at the time of the overlay.
amix doesn't support introducing an input mid-way, so the workaround is to add leading silence. You can use the adelay filter to do this.
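For example, a minimal sketch that prepends 6.5 seconds of silence to a stereo track (the filename is a placeholder; adelay takes one delay in milliseconds per channel):
ffmpeg -i tag_audio.wav -af "adelay=6500|6500" tag_audio_delayed.wav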
As for "make the audio from the first clip drop lower at the time of the overlay":
This is possible using the sidechaincompress filter, which takes two inputs and lowers the volume of the first input based on the volume of the second input.
So use,
ffmpeg.exe -y -hide_banner -ss 00:00:00.067 -i promo.mov -i tag.mov -filter_complex "[1:v]setpts=PTS+6.5/TB[1v];[0:v][1v]overlay=enable=gte(t\,6.5)[vout];[1:a]adelay=6.5s,apad,asplit=2[1amix][1aref];[0:a][1aref]sidechaincompress[0asc];[0asc][1amix]amix=inputs=2:duration=first[aout]" -map [vout] -map [aout] -c:v mpeg2video -c:a pcm_s16le -ar 48000 -af loudnorm=I=-20:print_format=summary -preset ultrafast -q:v 0 -t 10 complete.mxf
I want to only convert the audio of a video. The video file has one AC3 5.1 audio stream, and I want to make one MP3 at 256k and one AC3 at 384k out of it.
Currently my command looks like this:
.\ffmpeg.exe -i "file-in" -map 0:0 -map 0:1 -map 0:1 -c:v:0 copy -c:a:1 aac -bsf:a aac_adtstoasc -ac 2 -ar 48000 -ab 256k -c:a:1 ac3 -b:a 384k "file1-out"
Any idea what I'm missing here?
Okay, I got it ;)
.\ffmpeg.exe -i "file-in" -map 0:0 -map 0:1 -map 0:1 -c:v:0 copy -c:a:0 aac -bsf:a:0 aac_adtstoasc -ac:a:0 2 -ar 48000 -b:a:0 256k -c:a:1 ac3 -ac:a:1 6 -b:a:1 384k "file-out"
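Since the question asked for MP3 rather than AAC on the first track, a possible variant (an untested sketch) swaps in libmp3lame and drops the AAC-only aac_adtstoasc bitstream filter; whether MP3 is a good fit depends on the output container (it is fine in MKV):
.\ffmpeg.exe -i "file-in" -map 0:0 -map 0:1 -map 0:1 -c:v:0 copy -c:a:0 libmp3lame -ac:a:0 2 -ar 48000 -b:a:0 256k -c:a:1 ac3 -ac:a:1 6 -b:a:1 384k "file-out"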
I need to create an FFmpeg script which reads in an audio file ("testloop.wav" in this example), generates a video from the waveform using the "showcqt" filter, and then crops and overlays the output from that to generate a kaleidoscope effect. This is the code I have so far - the generation of the initial video and the output section work correctly, but there is a fault in the split, crop and overlay section which I cannot trace.
ffmpeg -i "testloop.wav" -i "testloop.wav" \
-filter_complex "[0:a]showcqt,format=yuv420p[v]" -map "[v]" \
"split [tmp1][tmp2]; \
[tmp1] crop=iw:(ih/3)*2:0:0, pad=0:ih+ih/2 [top]; \
[tmp2] crop=iw:ih/3:0:(ih/3)*2, hflip [bottom]; \
[top][bottom] overlay=0:(H/3)*2"\
-map 1:a:0 -codec:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p -codec:a aac -strict -2 -b:a 384k -r:a 48000 -movflags faststart "${i%.wav}.mp4
You can't split the filtergraph across separate options or define multiple filter_complex graphs; it all has to go in one -filter_complex. Also, there's no need to feed the input twice.
ffmpeg -i "testloop.wav" \
-filter_complex "[0:a]showcqt,format=yuv420p, \
split [tmp1][tmp2]; \
[tmp1] crop=iw:(ih/3)*2:0:0, pad=0:ih+ih/2 [top]; \
[tmp2] crop=iw:ih/3:0:(ih/3)*2, hflip [bottom]; \
[top][bottom] overlay=0:(H/3)*2" \
-c:v libx264 -crf 21 -bf 2 -flags +cgop -pix_fmt yuv420p \
-c:a aac -strict -2 -b:a 384k -ar 48000 -movflags +faststart out.mp4
(I'm not debugging the logic of the effect you're trying to achieve. Only the syntax)
I'm trying to synchronise some audio with video. The audio is recorded via an app in the M4U format; at the same time as the recording starts, two cameras are triggered and start recording. When the recording stops, both the audio and the cameras stop. These are out of sync by quite a bit, at least a second. The file lengths are the same, but the audio clearly starts recording earlier than the video.
I can synchronise these manually in Audacity, but I'd like to get it close via FFmpeg.
I've had a good look around and can find commands for delaying the start of the audio track, but not for cutting off the first few seconds of the audio. I'm trying something like this:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -itsoffset -3 -i audio.m4u -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -c:v libx264 -crf 23 -profile:v high -movflags faststart -b 5000k -shortest out.mp4
Any clues how to remove the first few seconds from the audio input?
The atrim filter does just that:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -i audio.m4u -map 0:v -map 1:a -af atrim=3,asetpts=N/SR/TB -c:v libx264 -pix_fmt yuv420p -crf 23 -profile:v high -movflags faststart -shortest out.mp4
The asetpts is added to reset the timestamps of the trimmed audio.
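An alternative, assuming the same inputs as above (a sketch, not tested), is to seek only the audio input by placing -ss before its -i, which drops the first 3 seconds without any filter:
ffmpeg -framerate 24 -i .\seq\%%04d.jpg -ss 3 -i audio.m4u -map 0:v -map 1:a -c:v libx264 -pix_fmt yuv420p -crf 23 -profile:v high -movflags faststart -shortest out.mp4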
I have a script that takes in input a video file (generally avi or mp4) and converts it to a "lower quality" mkv video optimized for web streaming.
The ffmpeg command I use is this one:
ffmpeg -fflags +genpts -i file:"$input" -sn -codec:v:0 libx264 -force_key_frames expr:gte\(t,n_forced*5\) -vf "scale=trunc(min(max(iw\,ih*dar)\,1280)/2)*2:trunc(ow/dar/2)*2" -pix_fmt yuv420p -preset superfast -crf 23 -b:v 1680000 -maxrate 1680000 -bufsize 3360000 -vsync vfr -profile:v high -level 41 -map_metadata -1 -threads 8 -codec:a:0 libmp3lame -ac 2 -ab 320000 -af "aresample=async=1" -y "$output"
The problem is that this command only includes the first audio track of my video. I have some dual language videos (italian and english) for which I want to include both languages.
Is there a simple ffmpeg command option that automatically includes all audio tracks found in a video?
Add -map 0:v -map 0:a to include the video stream and all audio streams. Note that once any -map option is used, ffmpeg's automatic stream selection is disabled, so the video stream has to be mapped explicitly as well.
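Applied to the command above, a hedged sketch (the maps are the only substantive change; -codec:a:0 is also widened to -codec:a so the MP3 settings apply to every audio track rather than just the first):
ffmpeg -fflags +genpts -i file:"$input" -sn -map 0:v -map 0:a -codec:v:0 libx264 -force_key_frames expr:gte\(t,n_forced*5\) -vf "scale=trunc(min(max(iw\,ih*dar)\,1280)/2)*2:trunc(ow/dar/2)*2" -pix_fmt yuv420p -preset superfast -crf 23 -b:v 1680000 -maxrate 1680000 -bufsize 3360000 -vsync vfr -profile:v high -level 41 -map_metadata -1 -threads 8 -codec:a libmp3lame -ac 2 -ab 320000 -af "aresample=async=1" -y "$output"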