ffmpeg - Unable to record screen and speaker audio at the same time - Linux

I'm trying to record my screen and speaker audio with ffmpeg but I'm facing various problems.
This is the command I'm using:
ffmpeg -video_size 1920x1080 -framerate 60 -f x11grab -i :0.0 -f pulse -ac 2 -i alsa_output.pci-0000_2d_00.4.analog-stereo.monitor output.mp4
but the output video is really slow and out of sync with the audio.
Moreover, I'm getting these warnings:
[x11grab @ 0x563635302480] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[aac @ 0x563635319f40] Queue input is backward in time
[mp4 @ 0x5636353172c0] Non-monotonous DTS in output stream 0:1; previous: 217669, current: 210387; changing to 217670. This may result in incorrect timestamps in the output file.
When I record the screen with an ALSA source (the microphone) instead, the output video has no problems.
How can I solve this?
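No fix is given in the thread, but the first warning points at one: raise -thread_queue_size on both inputs so neither grab blocks, and use a fast video preset so 1080p60 can be encoded in real time. A minimal sketch (untested; my assumption, with the device names from the question):
ffmpeg -thread_queue_size 1024 -f x11grab -video_size 1920x1080 -framerate 60 -i :0.0 \
       -thread_queue_size 1024 -f pulse -ac 2 -i alsa_output.pci-0000_2d_00.4.analog-stereo.monitor \
       -c:v libx264 -preset ultrafast -c:a aac output.mp4
If the encoder still can't keep up, the video falls behind the audio, which would explain the slow, out-of-sync output.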

Related

I need to convert audio from .mp3 to .gsm

I need to convert from .mp3 to .gsm (preferably with ffmpeg).
I have used it for several different formats, but this one isn't as simple as the others were.
I don't know what parameters I'm missing.
I tried using ffmpeg with the following command:
ffmpeg -i ".\example.mp3" ".\example.gsm"
But it shows me the following error:
Sample rate 8000Hz required for GSM, got 44100Hz
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!
ffmpeg -i ".\example.mp3" -ar 8000 -ac 1 ".\example.gsm"
-ar sets the audio sample rate and -ac the number of audio channels; they are placed after -i so they apply to the output, since the GSM encoder requires 8000 Hz mono audio.

FFmpeg segment doesn't show file size update in real time

I'm trying to run an ffmpeg mp3 stream with segmentation into one file per hour. Everything is working perfectly, except for one thing: when I run the command, the file size doesn't grow in real time as I need; it only grows in 256k chunks.
Is there a way to turn on a "real-time mode"?
I'm using Ubuntu 18.04 with FFmpeg 3.4.6.
This is the command I'm trying to run in a Linux terminal:
ffmpeg -i http://radiocentova.conectastm.com:8363/stream -y -acodec libmp3lame -b:a 16k -ac 1 -ar 11025 -vn -strftime 1 -f segment -segment_time 3600 -flush_packets 1 #test_%Y%m%d%H%M%S+00.mp3
[Screenshots: file size growth when recording with segment vs. recording without segment]
The flush_packets option has to be directed to the child muxer (mp3 in this case), so use
-segment_format_options flush_packets=1 instead of -flush_packets 1.
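Applied to the command above, that looks like this (a sketch; stream URL and output pattern taken from the question):
ffmpeg -i http://radiocentova.conectastm.com:8363/stream -y -acodec libmp3lame -b:a 16k -ac 1 -ar 11025 -vn \
       -strftime 1 -f segment -segment_time 3600 -segment_format_options flush_packets=1 test_%Y%m%d%H%M%S+00.mp3
With flush_packets=1 passed to the inner mp3 muxer, each packet is flushed to disk as it is written, so the current segment's file size grows continuously instead of in 256k chunks.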

FFmpeg RTP_MPEGTS over RTP protocol

I'm trying to implement a client/server application based on FFmpeg. Unfortunately, RTP_MPEGTS isn't documented in the official FFmpeg Documentation - Formats.
Anyway, I found inspiration in this old thread.
Server Side
(1) Capture mic audio as input, (2) encode it as PCM, 8 kHz mono, and (3) send it locally in RTP_MPEGTS format over the RTP protocol.
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f rtp_mpegts rtp://127.0.0.1:41954
This works, but on startup it warns: "[mpegts @ 0x7fda13024600] frame size not set"
Client Side (on the same machine)
(1) Receive the RTP audio stream as input and (2) write it to a file or play it back.
ffmpeg -i rtp://127.0.0.1:41954 -vcodec copy -y "output.wav"
I'm using -vcodec copy because I've already verified, on another RTP stream, that -acodec copy didn't work.
This hangs, and when I close it with the Ctrl+C shortcut it prints:
Input #0, rtp, from 'rtp://127.0.0.1:41954':
  Duration: N/A, start: 8.956122, bitrate: N/A
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0: Data: bin_data ([6][0][0][0] / 0x0006)
Output #0, wav, to 'output.wav':
Output file #0 does not contain any stream
I don't understand whether the client didn't receive any stream, or whether it can't write the RTP packets into the "output.wav" file. (Is this a client or a server problem?)
The old thread explains a workaround: the server could run two ffmpeg instances,
one producing a "tmp.ts" file via mpegts, the other taking "tmp.ts" as input and streaming it over RTP. Is that possible? (See the sketch below.)
Is there any better way to implement this client/server setup with the lowest latency possible?
Thanks for any help provided.
I tested this with an .aac file and it worked:
Streaming:
(Notice I use a multicast address.
If you test streaming and receiving on the same machine, you can use 127.0.0.1, the loopback address of the local host, instead.)
ffmpeg -f lavfi -i testsrc \
-stream_loop -1 -re -i "music.aac" \
-map 0:v -map 1:a \
-ar 8000 -ac 1 \
-f rtp_mpegts "rtp://239.1.1.9:1234"
You need a video source for the rtp_mpegts muxer. I created one with lavfi.
I used -stream_loop to loop the .aac file forever for my test. You don't need this with a mic as input.
Capture stream:
ffmpeg -y -i "rtp://239.1.1.9:1234" -c:a pcm_u8 "captured_stream.wav"
I set -c:a pcm_u8 on the capturing side on purpose, because setting it on the streaming side did not work for the capture.
The output is a low-quality 8-bit, 8 kHz mono audio file, but that is what you asked for.
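As for the two-instance workaround from the old thread: it should be possible in principle. A hypothetical, untested sketch (not from the answer above):
ffmpeg -f avfoundation -i none:2 -ar 8000 -acodec pcm_u8 -ac 1 -f mpegts tmp.ts
ffmpeg -re -i tmp.ts -c copy -f rtp_mpegts rtp://127.0.0.1:41954
But reading tmp.ts while the first instance is still writing it is fragile and adds latency, so the single-command approach above is preferable.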

FFmpeg merging audio and video to get resulting video

I need to merge audio and video using ffmpeg so that the result is a video with the same duration as the audio.
I have tried two commands for this in my Linux terminal. Both commands work for some of the input videos, but for other input videos they produce output identical to the input video; the audio doesn't get merged.
The commands I have tried are:
ffmpeg -i wonders.mp4 -i Carefull.mp3 -c copy testvid.mp4
and
ffmpeg -i wonders.mp4 -i Carefull.mp3 -strict -2 testvid.mp4
and
ffmpeg -i video.mp4 -i audio.wav -c:v copy -c:a aac -strict experimental output.mp4
and these are my input videos:
samplevid.mp4
https://vid.me/z44E
duration - 28 seconds
size - 1.1 MB
status - working
And
wonders.mp4
https://vid.me/gyyB
duration - 97 seconds
size - 96 MB
status - not working
I have observed that a large input video (more than 2 MB) is probably what triggers the issue.
But I still want a fix.
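No accepted fix appears above, but a likely cause (my assumption, not stated in the question) is ffmpeg's default stream selection: when the input video already contains an audio track, ffmpeg can keep that track rather than the MP3, so the output looks identical to the input. Explicit -map options force the choice, and -shortest trims the output to the shorter input:
ffmpeg -i wonders.mp4 -i Carefull.mp3 -map 0:v:0 -map 1:a:0 -c:v copy -c:a aac -shortest testvid.mp4
Here -map 0:v:0 takes the video from the first input and -map 1:a:0 the audio from the second, regardless of what other streams the MP4 carries; file size is irrelevant.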

Problems with point to point streaming using FFmpeg

I want to live stream video from a webcam and sound from a microphone from one computer to another, but there are some problems.
When I use this command line:
ffmpeg.exe -f dshow -rtbufsize 500M -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
The FFmpeg console starts filling with yellow warning messages and the stream becomes unstable: http://s16.postimg.org/qglcgr345/Untitled.png
To solve this problem I added a new parameter to the command line to set the frame rate, -r 25:
ffmpeg.exe -f dshow -rtbufsize 500M -r 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
After I added -r 25, the problem with the yellow messages disappeared, but another problem appeared: when I freshly start FFmpeg with this command line, video and sound look synchronous, but after one or two minutes a lag of about 25 seconds develops between video and sound, with the sound running behind the video. I have tried different protocols (UDP, TCP, RTP) but the problems are the same. Please help me!
I found the answer to my problem with -r and the asynchronous audio and video. For anyone interested, the answer is here: https://trac.ffmpeg.org/wiki/DirectShow (in the paragraph "Specifying input framerate").
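In short, that wiki section advises setting the frame rate as a dshow input option, so the device itself delivers 25 fps, rather than forcing -r on the input. A sketch of the adjusted command (untested; device names kept from the question):
ffmpeg.exe -f dshow -rtbufsize 500M -framerate 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234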
