FFmpeg: Get video and audio streams from different devices

I have an Ubuntu 14.04 machine with a connected video camera. The camera is connected to the PC through a Magewell HDMI-to-USB3 converter. On the same PC, a microphone is plugged into the analog microphone jack.
When I run arecord -l, I get the following audio devices:
arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: PCH [HDA Intel PCH], device 0: ALC283 Analog [ALC283 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 2: XI100DUSBHDMI [XI100DUSB-HDMI], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
I try to create an FLV video stream and send it to an nginx web server with the RTMP module, using the following command:
ffmpeg -f video4linux2 -framerate 30 -video_size 1280x720 -i /dev/video0 -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -video_size 1280x720 -threads 0 -f flv rtmp://192.168.0.36/live/test
I am now able to stream the video.
Now I also want to mix the audio sound to the video by sending the following command:
ffmpeg -f alsa -i hw:1 -f video4linux2 -framerate 30 -video_size 1280x720 -i /dev/video0 -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -video_size 1280x720 -threads 0 -ar 11025 -f flv rtmp://192.168.0.36/live/test
But now, unfortunately, I get only the audio from the ALSA device; I see no video stream.
I then tried writing the stream to a file instead of sending it to the RTMP server:
ffmpeg -f alsa -i hw:1 -f video4linux2 -i /dev/video0 out.mpg
If I open the file in a media player, I get both audio and video.
How do I have to change my FFmpeg parameters to stream both the audio and the video device input as a single FLV stream?
As I mentioned above, the command line call
ffmpeg -f alsa -i hw:1 -f video4linux2 -framerate 30 -video_size 1280x720 -i /dev/video0 -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -video_size 1280x720 -threads 0 -ar 11025 -f flv rtmp://192.168.0.36/live/test
encodes and streams only the audio, not the video.
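A minimal sketch of a possible fix, assuming the problem is stream selection: explicitly -map the video from the second input and the audio from the first, and give the audio an encoder FLV accepts (AAC here; the 44100 Hz sample rate is an assumption). Untested against this hardware.

```shell
# Sketch: map input 1's video and input 0's audio explicitly so both
# streams end up in the FLV output (stream selection is the assumed culprit).
ffmpeg -f alsa -i hw:1 \
       -f video4linux2 -framerate 30 -video_size 1280x720 -i /dev/video0 \
       -map 1:v -map 0:a \
       -vcodec libx264 -preset ultrafast -pix_fmt yuv420p \
       -acodec aac -ar 44100 \
       -threads 0 -f flv rtmp://192.168.0.36/live/test
```

Without explicit -map options, FFmpeg picks one "best" stream per type, which can interact badly with multiple capture inputs.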

Related

FFmpeg script cuts out sound

I have this FFmpeg script I'm running to automatically convert videos to Instagram's accepted codec. The script looks like this:
ffmpeg -analyzeduration 20M -probesize 20M -y -re -f lavfi -i "movie=filename='file.mp4':loop=5, setpts=N/(FRAME_RATE*TB)" -vcodec libx264 -b:v 3500k -vsync 2 -t 59 -acodec aac -b:a 128k -pix_fmt yuv420p -vf "scale=1080:1080:force_original_aspect_ratio=decrease,pad=1080:1080:(ow-iw)/2:(oh-ih)/2:white" -crf 24 new_file.mp4
However, that seems to cut out the audio, and I can't find out how to prevent it. I didn't use -an or anything similar, yet however I change the command the audio keeps being cut out. Any idea why?
The movie filter, by default, only reads one video stream from the input.
For looping, stream_loop is available without having to use filters.
ffmpeg -analyzeduration 20M -probesize 20M -y -re -stream_loop 5 -i "file.mp4" -vcodec libx264 -b:v 3500k -vsync cfr -t 59 -acodec aac -b:a 128k -pix_fmt yuv420p -vf "scale=1080:1080:force_original_aspect_ratio=decrease,pad=1080:1080:(ow-iw)/2:(oh-ih)/2:white" -crf 24 new_file.mp4

FFmpeg - Delay Only Video Stream Of Audio Linked dshow Input

I'm having a slight issue trying to sync my audio and video up with an acceptable margin of error. Here is my command:
ffmpeg -y -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M ^
-framerate 100 -pixel_format nv12 -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
-map 0:0,0:1 -map 0:1 -flags +cgop -force_key_frames expr:gte(t,n_forced*2) -c:v h264_nvenc -preset: llhp -pix_fmt nv12 ^
-b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -r 100 -af "aresample=async=250" ^
-vsync 1 -max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 ^
C:\Users\djcim\Videos\PC\PC\PC%02d.ts
My problem is that the video comes in slightly ahead of the audio. I can use -itsoffset, but then I have to call the video and audio as separate inputs, since -itsoffset offsets both audio and video. While that may seem the obvious solution, it causes inconsistent audio synchronization when the audio isn't opened together with the video: if the two aren't called at the same time, the video can end up ahead or behind by a 2-3 frame margin. When I call them at the same time, the video consistently comes in 2 frames ahead of the audio, every time. I just need a way to delay only the video stream, without delaying the audio, while keeping audio and video linked from the beginning. I've tried this with no luck:
ffmpeg -y -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M ^
-framerate 200 -pixel_format nv12 -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
-flags +cgop -force_key_frames expr:gte(t,n_forced*2) -c:v h264_nvenc -preset: llhp -pix_fmt nv12 -b:v 250M ^
-minrate 250M -maxrate 250M -bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -r 100 ^
-filter_complex "[0:v] setpts=PTS-STARTPTS+.032/TB [v]; [0:a] asetpts=PTS-STARTPTS, aresample=async=250 [a]" -map [v] ^
-map [a] -vsync 1 -max_muxing_queue_size 9999 -f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 ^
C:\Users\djcim\Videos\PC\PC\PC%02d.ts
Just like with -itsoffset, both the video and the audio end up delayed. You can delay only the audio with adelay, but there doesn't seem to be a video-delaying equivalent.
Any help or advice would be greatly appreciated.
As stated by Gyan in the comments, atrim worked. While it isn't delaying the video, it still lines everything up by discarding part of the audio stream.
ffmpeg -y -thread_queue_size 9999 -indexmem 9999 -guess_layout_max 0 -f dshow -video_size 3440x1440 -rtbufsize 2147.48M ^
-framerate 200 -pixel_format nv12 -i video="Video (00 Pro Capture HDMI 4K+)":audio="SPDIF/ADAT (1+2) (RME Fireface UC)" ^
-map 0:0,0:1 -map 0:1 -flags +cgop -force_key_frames expr:gte(t,n_forced*2) -c:v h264_nvenc -preset: llhp -pix_fmt nv12 ^
-b:v 250M -minrate 250M -maxrate 250M -bufsize 250M -c:a aac -ar 44100 -b:a 384k -ac 2 -r 100 ^
-af "atrim=0.038, asetpts=PTS-STARTPTS, aresample=async=250" -vsync 1 -ss 00:00:01.096 -max_muxing_queue_size 9999 ^
-f segment -segment_time 600 -segment_wrap 9 -reset_timestamps 1 C:\Users\djcim\Videos\PC\PC\PC%02d.ts
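As a side note, and an assumption beyond the original answer: newer FFmpeg builds (4.1 and later) ship a tpad video filter that can prepend cloned frames, which acts as the video-side counterpart of adelay. A minimal sketch with a hypothetical input file:

```shell
# Sketch, assuming ffmpeg >= 4.1: tpad delays only the video by cloning
# the first frame for 32 ms; the audio passes through untouched.
ffmpeg -i input.mp4 \
       -filter_complex "[0:v]tpad=start_duration=0.032:start_mode=clone[v]" \
       -map "[v]" -map 0:a -c:a copy out.mp4
```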

How to create Silent Opus Audio Files

I need silent Opus audio files ranging from 1 second to 60 minutes.
I found this example for wav files:
60 seconds of silent audio in WAV:
ffmpeg -ar 48000 -t 60 -f s16le -acodec pcm_s16le -ac 2 -i /dev/zero -acodec copy output.wav
60 seconds of silent audio in MP3:
ffmpeg -ar 48000 -t 60 -f s16le -acodec pcm_s16le -ac 2 -i /dev/zero -acodec libmp3lame -aq 4 output.mp3
How can I do the same for Opus using ffmpeg or a similar tool?
Using a recent build of ffmpeg, run
ffmpeg -f lavfi -i anullsrc -ac 2 -ar 48000 -t 30 -c:a libopus file.opus
You can add -vbr 0 -b:a 128k for a CBR encode.
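Since the question asks for durations from 1 second up to 60 minutes, the command above can be wrapped in a small loop; the duration list here is just an illustration:

```shell
# Sketch: generate silent stereo Opus files for a few example durations
# (in seconds) using the anullsrc lavfi source.
for t in 1 60 600 3600; do
  ffmpeg -f lavfi -i anullsrc=r=48000:cl=stereo -t "$t" -c:a libopus "silence_${t}s.opus"
done
```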

ffmpeg audio and video sync error

./ffmpeg \
-f alsa -async 1 -ac 2 -i hw:2,0 \
-f video4linux2 -vsync 1 -s:v vga -i /dev/video0 \
-acodec aac -b:a 40k \
-r 25 -s:v vga -vcodec libx264 -strict -2 -crf 25 -preset fast -b:v 320K -pass 1 \
-f flv rtmp://192.168.2.105/live/testing
With the above command I am able to stream at 25 fps, but there is no audio/video synchronization: the audio runs ahead of the video. I am using FFmpeg 0.11.1 on a PandaBoard for RTMP streaming. Please help me solve this problem.
Thanks
Ameeth
Don't use -pass 1 if you're not actually doing two-pass encoding.
From the docs (emphasis added):
‘-pass[:stream_specifier] n (output,per-stream)’
Select the pass number (1 or 2). It is used to do two-pass video encoding. The statistics of the video are recorded in the first pass into a log file (see also the option -passlogfile), and in the second pass that log file is used to generate the video at the exact requested bitrate. On pass 1, you may just deactivate audio and set output to null, examples for Windows and Unix:
ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
I was streaming to Twitch and, funnily enough, removing the -r option made the video sync with the audio. You might still want to limit the framerate in some way; unfortunately, I have no solution for that, but dropping -r does let the video sync with the audio very well.
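One possible way to keep a framerate cap without the output-side -r, offered here as an assumption rather than part of either original answer, is to request the rate from the capture device itself:

```shell
# Sketch: ask the v4l2 device for 25 fps on the input side instead of
# forcing -r on the output; -pass 1 is dropped since no two-pass encode
# is being done.
./ffmpeg \
  -f alsa -async 1 -ac 2 -i hw:2,0 \
  -f video4linux2 -framerate 25 -vsync 1 -s:v vga -i /dev/video0 \
  -acodec aac -b:a 40k \
  -s:v vga -vcodec libx264 -strict -2 -crf 25 -preset fast -b:v 320K \
  -f flv rtmp://192.168.2.105/live/testing
```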

FFmpeg 0.11.1 rtmp mp4 streaming issues

I am using FFmpeg 0.11.1 on a PandaBoard for streaming, running Linux linaro-developer 3.4.0-1-linaro-lt-omap #1~120625232503-Ubuntu (Ubuntu OS).
I want to live-stream raw video from a video device or a cam to an RTMP server in MP4 format, using the following command:
./ffmpeg \
-f alsa -async 1 -ac 2 -i hw:2,0 \
-f video4linux2 -i /dev/video0 \
-acodec aac -b:a 40k \
-r 50 -s 320x240 -vcodec libx264 -strict -2 -b:v 320K -pass 1 \
-f flv rtmp://...../mp4:demo101
With this command I am able to stream, but the fps varies between 8, 7, 6, and so on. Help me figure it out.
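No answer is recorded for this one. As a guess, combining the -pass 1 advice from the answer to the previous question with an input-side framerate request (the -framerate option for video4linux2 on this old build is an assumption):

```shell
# Sketch: drop -pass 1 (no two-pass encode is being done) and request a
# fixed capture rate from the v4l2 device instead of resampling with -r.
./ffmpeg \
  -f alsa -async 1 -ac 2 -i hw:2,0 \
  -f video4linux2 -framerate 25 -i /dev/video0 \
  -acodec aac -b:a 40k \
  -s 320x240 -vcodec libx264 -strict -2 -b:v 320K \
  -f flv rtmp://...../mp4:demo101
```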