FFmpeg - Delay Only Video Stream Of Audio Linked dshow Input

I'm having a slight issue trying to sync my audio and video up with an acceptable margin of error. Here is my command:
ffmpeg -y -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M ^
-framerate 100 -pixel_format nv12 -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
-map 0:0,0:1 -map 0:1 -flags +cgop -force_key_frames expr:gte(t,n_forced*2) -c:v h264_nvenc -preset: llhp -pix_fmt nv12 ^
-b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -r 100 -af "aresample=async=250" ^
-vsync 1 -max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 ^
C:\Users\djcim\Videos\PC\PC\PC%02d.ts
My problem is that the video comes in slightly ahead of the audio. I could use -itsoffset, but since -itsoffset offsets both audio and video I would have to open the video and audio as separate inputs. While that may seem like the obvious solution, it causes inconsistent audio synchronization when the audio isn't opened together with the video: if the two aren't opened as one input, the video can end up either ahead of or behind the audio by a 2-3 frame margin. When I open them together, the video consistently comes in exactly 2 frames ahead of the audio, every time. I just need a way to delay only the video stream, without delaying the audio, while keeping audio and video linked from the start. I've tried this with no luck:
ffmpeg -y -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M ^
-framerate 200 -pixel_format nv12 -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
-flags +cgop -force_key_frames expr:gte(t,n_forced*2) -c:v h264_nvenc -preset: llhp -pix_fmt nv12 -b:v 250M ^
-minrate 250M -maxrate 250M -bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -r 100 ^
-filter_complex "[0:v] setpts=PTS-STARTPTS+.032/TB [v]; [0:a] asetpts=PTS-STARTPTS, aresample=async=250 [a]" -map [v] ^
-map [a] -vsync 1 -max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 ^
C:\Users\djcim\Videos\PC\PC\PC%02d.ts
Just like with -itsoffset, both video and audio end up delayed. You can delay only the audio with adelay, but there doesn't seem to be a video-delaying equivalent.
Any help or advice would be greatly appreciated.

As stated by Gyan in the comments, atrim worked. While it isn't delaying the video, it still lines everything up by dropping the first part of the audio stream.
ffmpeg -y -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M ^
-framerate 200 -pixel_format nv12 -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
-map 0:0,0:1 -map 0:1 -flags +cgop -force_key_frames expr:gte(t,n_forced*2) -c:v h264_nvenc -preset: llhp -pix_fmt nv12 ^
-b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -r 100 ^
-af "atrim=0.038, asetpts=PTS-STARTPTS, aresample=async=250" -vsync 1 -ss 00:00:01.096 -max_muxing_queue_size 9999 ^
-f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 C:\Users\djcim\Videos\PC\PC\PC%02d.ts
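On the point above that adelay has no video counterpart: newer FFmpeg builds include a tpad filter, which can pad the start of a video stream and so acts as a rough video-side equivalent. A minimal sketch of that approach, not tested with this capture setup; the 0.032 s value is only illustrative and the bitrate/segment options from the commands above would still need to be added back:
ffmpeg -f dshow -video_size 3440x1440 -framerate 100 -pixel_format nv12 ^
-i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
-filter_complex "[0:v] tpad=start_duration=0.032:start_mode=clone [v]; [0:a] aresample=async=250 [a]" ^
-map "[v]" -map "[a]" -c:v h264_nvenc -c:a aac out.ts
Here tpad=start_duration=0.032:start_mode=clone holds the first video frame for 32 ms before the rest of the stream, which delays the video content relative to the untouched audio.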

Related

mapping 3 or more outputs on ffmpeg command

I am using the command below to stream to multiple outputs. It works fine with up to 2 outputs, but when I add a third output it gives an error.
Command for 2 outputs:
ffmpeg_stream = 'ffmpeg -thread_queue_size 1024 -f x11grab -draw_mouse 0 -s 1920x1080 -i :122 -f alsa -i pulse -ac 2 -c:a aac -b:a 64k -threads 0 -flags +global_header -c:v libx264 -pix_fmt yuv420p -s 1920x1080 -threads 0 -f tee -map 0:0 -map 1:0 "[f=flv]rtmps://live-api-s.facebook.com:443/rtmp/stream_key|[f=flv]rtmp://a.rtmp.youtube.com/live2/stream_key"'
Command for 3 outputs:
ffmpeg_stream = 'ffmpeg -thread_queue_size 1024 -f x11grab -draw_mouse 0 -s 1920x1080 -i :122 -f alsa -i pulse -ac 2 -c:a aac -b:a 64k -threads 0 -flags +global_header -c:v libx264 -pix_fmt yuv420p -s 1920x1080 -threads 0 -f tee -map 0:0 -map 1:0 -map 2:0 "[f=flv]rtmps://live-api-s.facebook.com:443/rtmp/stream_key|[f=flv]rtmp://a.rtmp.youtube.com/live2/stream_key|[f=flv]rtmp://play.stream.some_domain/stream/i48b-rdq0-jwme-2yj0"'
Error:
liveStreaming | Invalid input file index: 2.
Error with the custom RTMP URL only:
liveStreaming | [rtmp # 0x55af7e7f8280] Server error: Already publishing
liveStreaming | [tee # 0x55af7e12f540] Slave '[f=flv]rtmp://stream.domain/stream/i48b-rdq0-jwme-2yj0': error opening: Operation not permitted
liveStreaming | [tee # 0x55af7e12f540] Slave muxer #1 failed, aborting.
liveStreaming | [flv # 0x55af7e4d1880] Failed to update header with correct duration.
liveStreaming | [flv # 0x55af7e4d1880] Failed to update header with correct filesize.
liveStreaming | Could not write header for output file #0 (incorrect codec parameters ?): Operation not permitted
liveStreaming | Error initializing output stream 0:1 --
liveStreaming | [aac # 0x55af7e1346c0] Qavg: -nan
liveStreaming | [alsa # 0x55af7e111dc0] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
liveStreaming | Conversion failed!
Thank you
-map is used to select input streams for the output, and you only have 2 inputs. For the 3rd output using tee, you just add it inside the output URL, as you already did. Remove -map 2:0:
ffmpeg_stream = 'ffmpeg -thread_queue_size 1024 -f x11grab -draw_mouse 0 -s 1920x1080 -i :122 -f alsa -i pulse -ac 2 -c:a aac -b:a 64k -threads 0 -flags +global_header -c:v libx264 -pix_fmt yuv420p -s 1920x1080 -threads 0 -f tee -map 0:0 -map 1:0 "[f=flv]rtmps://live-api-s.facebook.com:443/rtmp/stream_key|[f=flv]rtmp://a.rtmp.youtube.com/live2/stream_key|[f=flv]rtmp://play.stream.some_domain/stream/i48b-rdq0-jwme-2yj0"'
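As a side note on the "Slave muxer #1 failed, aborting" lines in the log: the tee muxer also supports a per-slave onfail option, so if it is acceptable for one endpoint to drop out without taking down the other two, each slave can be marked onfail=ignore. A sketch based on the corrected command above, with the same placeholder stream keys:
ffmpeg_stream = 'ffmpeg -thread_queue_size 1024 -f x11grab -draw_mouse 0 -s 1920x1080 -i :122 -f alsa -i pulse -ac 2 -c:a aac -b:a 64k -threads 0 -flags +global_header -c:v libx264 -pix_fmt yuv420p -s 1920x1080 -threads 0 -f tee -map 0:0 -map 1:0 "[f=flv:onfail=ignore]rtmps://live-api-s.facebook.com:443/rtmp/stream_key|[f=flv:onfail=ignore]rtmp://a.rtmp.youtube.com/live2/stream_key|[f=flv:onfail=ignore]rtmp://play.stream.some_domain/stream/i48b-rdq0-jwme-2yj0"'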

ffmpeg output to multiple rtmp simultaneously

I have this situation:
I need to stream to 3 different RTMP endpoints:
RTMP 1 is normal, with all the audio and video.
RTMP 2 is the same video but with different audio.
RTMP 3 is only the audio of the first input.
This is my attempt:
ffmpeg -re -i /usr/VIDEO/my_video.mp4 -i /usr/VIDEO/x_audio.mp3 \
-map 0:v -c:v libx264 -vf format=yuv420p -b:v 2000k -bufsize 3000k -maxrate 2000k -s 1024X576 -g 60 -map 0:a -c:a aac -b:a 192k -ar 44100 -f flv rtmp://my_ip/live/pass \
-map 0:v -c:v libx264 -vf format=yuv420p -b:v 2000k -bufsize 3000k -maxrate 2000k -s 1024X576 -g 60 -map 1:a -streamloop -shortest -f flv rtmp://my_ip/noaudio/pass \
-map 0:a aac -b:a 192k -ar 44100 -f flv rtmp://my_ip/only_audio/pass
I thought this would be OK, but it isn't.
Where is my mistake?
OK, I found a solution:
ffmpeg -re -i /usr/VIDEO/my_video.mp4 -re -i /usr/VIDEO/xaudio.mp3 \
-map 0 -c:v libx264 -vf format=yuv420p -b:v 2000k -bufsize 3000k -maxrate 2000k -s 1024X576 -g 60 -c:a aac -b:a 192k -ar 44100 -f flv rtmp://my_ip/live/pass \
-map 0:v -c:v libx264 -vf format=yuv420p -b:v 2000k -bufsize 3000k -maxrate 2000k -s 1024X576 -g 60 -map 1:a -c:a aac -b:a 192k -ar 44100 -f flv rtmp://my_ip/noaudio/pass \
-map 0:a -c:a aac -b:a 192k -ar 44100 -f flv rtmp://my_ip/onlyaudio/pass
and it works like that.
To loop the audio, add -stream_loop (--NUMBER--) before the audio input, where (--NUMBER--) is the number of times it should repeat:
repeat once: -stream_loop 1
repeat twice: -stream_loop 2
repeat 3 times: -stream_loop 3
... and so on
ffmpeg -re -i /usr/VIDEO/my_video.mp4 -stream_loop (--NUMBER--) -re -i /usr/VIDEO/xaudio.mp3 \
-map 0 -c:v libx264 -vf format=yuv420p -b:v 2000k -bufsize 3000k -maxrate 2000k -s 1024X576 -g 60 -c:a aac -b:a 192k -ar 44100 -f flv rtmp://my_ip/live/pass \
-map 0:v -c:v libx264 -vf format=yuv420p -b:v 2000k -bufsize 3000k -maxrate 2000k -s 1024X576 -g 60 -map 1:a -c:a aac -b:a 192k -ar 44100 -f flv rtmp://my_ip/noaudio/pass \
-map 0:a -c:a aac -b:a 192k -ar 44100 -f flv rtmp://my_ip/onlyaudio/pass
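For completeness, and based on the documented -stream_loop values rather than the original answer: -stream_loop -1 loops the input indefinitely, which is handy when the audio is much shorter than the video; adding -shortest to that output then ends it once the video runs out. A sketch of just the second output:
ffmpeg -re -i /usr/VIDEO/my_video.mp4 -stream_loop -1 -re -i /usr/VIDEO/xaudio.mp3 \
-map 0:v -map 1:a -c:v libx264 -vf format=yuv420p -b:v 2000k -bufsize 3000k -maxrate 2000k -s 1024x576 -g 60 -c:a aac -b:a 192k -ar 44100 -shortest -f flv rtmp://my_ip/noaudio/pass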
Use the tee muxer. It's more complicated but more efficient: you can encode streams once and re-use them for multiple outputs.
ffmpeg -re -i video.mp4 -re -i audio.mp3 -map 0 -map 1:a -c:v libx264 -c:a aac -vf "scale=1024:-2,format=yuv420p" -b:v 2000k -bufsize 3000k -maxrate 2000k -g 60 -b:a 192k -ar 44100 -shortest -f tee "[select=\'v:0,a:0\':f=flv]rtmp://my_ip/normal|[select=\'v:0,a:1\':f=flv]rtmp://my_ip/altaudio|[select=\'a:0\':f=flv:onfail=ignore]rtmp://my_ip/onlyaudio"

FFmpeg script cuts out sound

I have this ffmpeg script that I'm running to automatically convert videos to Instagram's accepted codec.
The script looks like this:
ffmpeg -analyzeduration 20M -probesize 20M -y -re -f lavfi -i "movie=filename='file.mp4':loop=5, setpts=N/(FRAME_RATE*TB)" -vcodec libx264 -b:v 3500k -vsync 2 -t 59 -acodec aac -b:a 128k -pix_fmt yuv420p -vf "scale=1080:1080:force_original_aspect_ratio=decrease,pad=1080:1080:(ow-iw)/2:(oh-ih)/2:white" -crf 24 new_file.mp4
However, that seems to cut out the audio, and I can't figure out how to prevent it. I didn't use -an or anything like that, and no matter what I try, the audio keeps being cut out. Any idea why?
The movie filter, by default, only reads one video stream from the input.
For looping, stream_loop is available without having to use filters.
ffmpeg -analyzeduration 20M -probesize 20M -y -re -stream_loop 5 -i "file.mp4" -vcodec libx264 -b:v 3500k -vsync cfr -t 59 -acodec aac -b:a 128k -pix_fmt yuv420p -vf "scale=1080:1080:force_original_aspect_ratio=decrease,pad=1080:1080:(ow-iw)/2:(oh-ih)/2:white" -crf 24 new_file.mp4
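If for some reason the filter-based looping has to stay, the movie filter also has a streams (s) option that pulls both the default video and the default audio from the file, with the looped timestamps regenerated on each chain. A rough sketch of that variant, assuming the same file.mp4 and keeping the original output options, not tested against this workflow:
ffmpeg -analyzeduration 20M -probesize 20M -y -re -f lavfi -i "movie=filename='file.mp4':loop=5:s=dv+da[vr][ar];[vr]setpts=N/(FRAME_RATE*TB)[out0];[ar]asetpts=N/SR/TB[out1]" -map 0:v -map 0:a -vcodec libx264 -b:v 3500k -t 59 -acodec aac -b:a 128k -pix_fmt yuv420p -vf "scale=1080:1080:force_original_aspect_ratio=decrease,pad=1080:1080:(ow-iw)/2:(oh-ih)/2:white" -crf 24 new_file.mp4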

How to create Silent Opus Audio Files

I need silent Opus audio files ranging in length from 1 second to 60 minutes.
I found these examples for WAV and MP3 files:
60 seconds of silent audio in WAV:
ffmpeg -ar 48000 -t 60 -f s16le -acodec pcm_s16le -ac 2 -i /dev/zero -acodec copy output.wav
60 seconds of silent audio in MP3:
ffmpeg -ar 48000 -t 60 -f s16le -acodec pcm_s16le -ac 2 -i /dev/zero -acodec libmp3lame -aq 4 output.mp3
How can I do the same for Opus using ffmpeg or a similar tool?
Using a recent build of ffmpeg, run
ffmpeg -f lavfi -i anullsrc -ac 2 -ar 48000 -t 30 -c:a libopus file.opus
You can add -vbr 0 -b:a 128k for a CBR encode.
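Since the question asks for a whole range of lengths rather than a single file, the same command can be wrapped in a small shell loop. A minimal sketch, assuming a POSIX shell and that a fixed set of durations in seconds (3600 s = 60 minutes) is enough:
# generate stereo 48 kHz silent Opus files of several durations
for t in 1 10 60 600 1800 3600; do
  ffmpeg -f lavfi -i anullsrc=r=48000:cl=stereo -t "$t" -c:a libopus "silence_${t}s.opus"
done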

FFMPEG: Get video and audio stream from different devices

I have an Ubuntu 14.04 machine with a video camera attached; the camera is connected to the PC through a Magewell HDMI-to-USB3 converter. On the same PC a microphone is connected via the analog microphone jack.
When I call arecord -l I get the following audio devices:
arecord -l
**** List of CAPTURE hardware devices ****
Card 1: PCH [HDA Intel PCH], Device 0: ALC283 Analog [ALC283 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
Card 2: XI100DUSBHDMI [XI100DUSB-HDMI], Device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
I try to create an FLV video stream and send it to an nginx web server with the RTMP module using the following command:
ffmpeg -f video4linux2 -framerate 30 -video_size 1280x720 -i /dev/video0 -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -video_size 1280x720 -threads 0 -f flv rtmp://192.168.0.36/live/test
I am now able to stream the video.
Now I also want to add the audio to the video stream by using the following command:
ffmpeg -f alsa -i hw:1 -f video4linux2 -framerate 30 -video_size 1280x720 -i /dev/video0 -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -video_size 1280x720 -threads 0 -ar 11025 -f flv rtmp://192.168.0.36/live/test
But now, unfortunately, I only get the audio from the ALSA device; I can see no video stream.
Next I tried writing the stream to a file instead of sending it to the RTMP server:
ffmpeg -f alsa -i hw:1 -f video4linux2 -i /dev/video0 out.mpg
If I then open the file in a media player, I get both the audio and the video.
How do I have to change my ffmpeg parameters so that both the audio and the video device input end up in a single FLV stream?
As I mentioned above, the command line call
ffmpeg -f alsa -i hw:1 -f video4linux2 -framerate 30 -video_size 1280x720 -i /dev/video0 -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -video_size 1280x720 -threads 0 -ar 11025 -f flv rtmp://192.168.0.36/live/test
only encodes and streams the audio, not the video.
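One thing worth trying, offered as a sketch rather than a confirmed fix: map both streams explicitly, so that neither the v4l2 video nor the ALSA audio can be dropped by ffmpeg's automatic stream selection, and give the FLV output an explicit audio codec. Not verified against this hardware:
ffmpeg -f alsa -i hw:1 -f video4linux2 -framerate 30 -video_size 1280x720 -i /dev/video0 \
-map 1:v -map 0:a -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -c:a aac -ar 44100 -threads 0 \
-f flv rtmp://192.168.0.36/live/test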
