openRTSP default 25fps encoding (not 24) - Linux

I want to capture the RTSP stream from some IP cameras, and after looking around I found two great tools to do this: avconv and openRTSP:
openRTSP -u user password rtsp://10.48.34.125/axis-media/media.amp
avconv -i "rtsp://user:password#10.48.34.125/axis-media/media.amp" -vcodec copy -f mp4 10.48.34.125.mp4
but for some voodoo reason when I need to use URLs without an specific extension, such as:
rtsp://user:password#10.48.34.46/
avconv returns 401 Unauthorized
so I'm stuck with openRTSP at the moment...
The thing is, unlike avconv, openRTSP outputs a raw file encoded at 25fps, which made some of my videos look like they were in fast-forward. I found a (CPU-expensive) way to re-encode the file to a frame rate closer to what I need:
avconv -r 7 -i video-H264-1 -r 24 -f mp4 10.48.34.28.mp4
(In this example I'm forcing the frame rate of the raw file to be 7, and the frame rate of the output file to be 24. I tried using openRTSP's built-in flags, but the output file still had a frame rate of 25: openRTSP -f 7 -u user password rtsp://10.48.34.145/mpeg4/media.3gp)
Sadly the video looks odd at certain points, because the original stream sometimes has a variable frame rate (for example at night).
My question is: is there some way to deactivate this default encoding to 25fps?
And why 25? I mean, isn't the norm 24?

Try:
avconv -rtsp_transport tcp -i rtsp://server -an -vcodec copy -f mp4 10.48.34.28.mp4
If you want to change the original video rate to 24, you must transcode it:
avconv -rtsp_transport tcp -i rtsp://server -an -vcodec libx264 -r 24 -f mp4 10.48.34.28.mp4
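As for why 25: that is the PAL norm (24 is the cinema rate), and it is also the frame rate avconv/ffmpeg assume by default for raw video with no container timestamps, which is exactly what openRTSP's default dump is. Note too that openRTSP's -f option only takes effect when an actual container file is written (-q or -4), not on the raw dump. If avconv keeps returning 401 on the extensionless URL, a hedged workaround is to let openRTSP handle the RTSP session, mux to MP4 on stdout with -4 and -f, and remux with avconv without re-encoding; a sketch using the flags from the question (whether your avconv build accepts the piped MP4 may vary):
openRTSP -4 -f 7 -u user password rtsp://10.48.34.46/ | avconv -i - -vcodec copy 10.48.34.46.mp4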

Related

FFmpeg stream dynamic png

I would like to know if it's possible to stream a PNG or any other kind of image using ffmpeg. I would like to generate the image continuously with Node.js, updating it every 10 seconds, to display game stats in a corner, and mix it with some background music or pre-recorded commentary. Additionally, I would like to mix in a video, with the image acting as an overlay.
I am also not sure whether this is possible with a transparent PNG image.
I couldn't get my head around doing the mixing with ffmpeg, and it looks very complicated, so I would like to get some help with it.
I have video files stored in a folder that I would like to stream continuously, mixing different music and an image over them. I would like to have it all working continuously, without stopping the stream.
Is this possible with the ffmpeg CLI on Linux, or can't I avoid using a desktop Windows PC for such a thing?
Well, after digging through the documentation and asking for help on IRC, I came up with the following command.
First I store the list of tracks in a text file such as:
playlist.txt
file 'song1.mp3'
file 'song2.mp3'
file 'song3.mp3'
Then I want to concatenate the tracks, so I use the concat demuxer (-f concat) and specify the input as the text file.
The second thing is using a static image as an input that I can manually update (image.png below).
ffmpeg -re -y -f concat -safe 0 -i playlist.txt -framerate 1 -loop 1 -f image2 -i image.png \
-vcodec libx264 -pix_fmt yuv420p -preset ultrafast -r 12 -g 24 -b:v 4500k \
-acodec libmp3lame -ar 44100 -threads 6 -qscale 3 -b:a 128k -bufsize 512k \
-f flv "rtmp://"
The rest specifies the output format and other settings for streaming.
That's what I came up with so far; not sure if there's a better way of doing this, but right now it's sufficient for my needs.
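For the video-plus-overlay part of the question, the usual route is the overlay filter; transparent PNGs work, since the alpha channel is honoured. A minimal sketch with hypothetical filenames:
# draw overlay.png 10px from the top-left corner of background.mp4
ffmpeg -i background.mp4 -i overlay.png -filter_complex "[0:v][1:v]overlay=10:10" \
-vcodec libx264 -pix_fmt yuv420p -acodec copy out.mp4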

FFmpeg: encode audio and forced subtitles at the same time?

I'm using the latest static build of ffmpeg for Windows.
My input file (.mkv) is:
[video] - 1080, V_MPEG4/ISO/AVC, 14.6 Mbps, ID#0
[audio] - DTS 5.1, 1510 Kbps, ID#1
[subtitles] - S_TEXT/ASS Lossless English, ID#14
My problem is this: I convert the audio so that my target player, an XB1 console (media support FAQ), is able to play the audio/video. However, sometimes it's rather difficult to hear, or parts may be in a foreign language, so I want to force the English subtitles into the mix at the same time I convert the audio.
Currently, for the audio, I use the following command:
ffmpeg -i input.mkv -codec copy -acodec ac3 output.mkv
Can I somehow tie in the forced subtitles (onto the video) to save the extra step of taking the output.mkv and forcing subtitles onto it afterwards?
Edit: I've tried using the following command to extract the subtitles to be able to edit them:
ffmpeg -i Movie.mkv -map 0:s:14 subs.srt
However, I get the error: Stream map '0:s:14' matches no streams
Edit 2: I attempted to extract the subtitles and succeeded with
ffmpeg -i input.mkv -map 0:14 -c copy subtitles.ass
(0:s:14 would have meant the fifteenth subtitle stream, while 0:14 selects the stream whose absolute index is 14.) But I'm still looking to force the subtitles, nonetheless!
Also, a little bonus to this question: can I somehow extract the .ass file and edit it to only produce subtitles for the foreign parts, so English audio doesn't have subtitles during the movie but foreign audio does?
Cheers
Edit 3:
When I try to use both of the commands at once (my earlier-mentioned audio conversion plus the one from the ffmpeg wiki):
ffmpeg -i input.mkv -codec copy -acodec ac3 -vf "ass=subs.ass" output.mkv
I get the following error from ffmpeg:
Filtergraph 'ass=subs.ass' was defined for video output stream 0:0 but codec copy was selected.
Filtering and streamcopy cannot be used together.
Since your media player does not support subtitles, the text has to be burnt onto the video image. For that, use
ffmpeg -i input.mkv -vf "ass=subs.ass" -c:v libx264 -crf 20 -c:a ac3 output.mkv
This will re-encode the video, since text is being added. The CRF value controls the video quality. Lower values produce better quality but larger files. 18 to 28 is a decent range to try.
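If you'd rather skip the extraction step, the subtitles filter can also render the track straight from the source file (it requires an ffmpeg build with libass). A sketch, assuming the ASS track is the file's first subtitle stream (si=0):
ffmpeg -i input.mkv -vf "subtitles=input.mkv:si=0" -c:v libx264 -crf 20 -c:a ac3 output.mkv
For the bonus question: the extracted .ass file is plain text, so you can delete the Dialogue lines covering the English parts in a text editor and burn in only what remains.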

FFmpeg - Segment audio to chunks

I have tried this example to segment a given video file with ffmpeg into an m3u8 playlist and smaller chunks (.ts files). This actually worked great. Is it possible to do practically the same thing with an audio input?
This was my most promising approach so far (capturing live audio on Windows):
ffmpeg -f dshow -i audio="<name of input device>" -acodec libmp3lame -ab 64000 | segmenter - 10 stream stream.m3u8 http://<IP_OF_SERVER>/stream/stream/ 5 1
But this returns the following error:
At least one output file must be specified.
Could not open input file, make sure it is an mpegts file: -1
I really would not know how to convert the live audio stream to an mpegts file.
Could anyone please give me a hint?
Thanks a lot
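For what it's worth, the first error just means ffmpeg was given no output: to feed the pipe you would add -f mpegts - so the MPEG-TS goes to stdout. Alternatively, ffmpeg can produce the playlist and chunks itself with its hls muxer, with no external segmenter; a sketch, reusing the hypothetical device name from above:
ffmpeg -f dshow -i audio="<name of input device>" -acodec libmp3lame -b:a 64k \
-f hls -hls_time 10 -hls_list_size 5 stream.m3u8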

Problems with point to point streaming using FFmpeg

I want to live stream video from a webcam and sound from a microphone from one computer to another, but there are some problems.
When I use this command line:
ffmpeg.exe -f dshow -rtbufsize 500M -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
The FFmpeg console starts filling with yellow warning messages and the stream becomes unstable: http://s16.postimg.org/qglcgr345/Untitled.png
To solve this problem I added a new parameter to the command line to set the frame rate, -r 25:
ffmpeg.exe -f dshow -rtbufsize 500M -r 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
After I added -r 25, the problem with the yellow messages disappeared, but then another problem appeared. When I freshly start FFmpeg with this command line, video and sound look synchronous, but after one or two minutes a ~25-second lag appears between video and sound; the sound falls behind the video. I have tried this with different protocols (UDP, TCP, RTP) but the problems are the same. Please help me!
I found the answer to my problem with "-r" and asynchronous audio and video. For anyone interested, the answer is here: https://trac.ffmpeg.org/wiki/DirectShow (in the paragraph "Specifying input framerate").
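In short, per that wiki page, the rate should be passed as a dshow input option (before -i) rather than resampled on the output side with -r. A sketch, assuming the camera natively supports 25 fps:
ffmpeg.exe -f dshow -rtbufsize 500M -framerate 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234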

Add audio (with an offset) to video with FFmpeg

I have a 10-minute video and a 50-minute audio MP3.
The video starts 500 seconds into the audio.
Using FFmpeg, how can I add the audio to the video but specify a 500-second audio offset (so that they sync up)?
EDIT:
At the bottom of this page it suggests how to specify an offset:
$ ffmpeg -i video_source -itsoffset delay -i audio_source -map 0:x -map 1:y .......
However, when I apply this, it still starts the audio from the start.
We are 8 years later, and -itsoffset does work.
Exactly as in your linked page:
ffmpeg -i input_1 -itsoffset 00:00:03 -i input_2
Note that you place the -itsoffset switch before the input you want to delay; in this case input_2 will be delayed.
So in your case that the video starts later, you would add -itsoffset 00:08:20 before the video input.
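A minimal sketch for this exact case (filenames hypothetical): delay the video by 500 seconds (00:08:20), then take the video from the second input and the audio from the first:
ffmpeg -i audio.mp3 -itsoffset 00:08:20 -i video.mp4 -map 1:v -map 0:a -c copy synced.mkv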
I couldn't get audio to offset properly either, and some searching suggests that -itsoffset is currently broken.
You could try to get/compile an old version of ffmpeg from before it broke (which doesn't sound like much fun).
Alternatively, you could pad your audio with the necessary silence using something like sox and then combine:
sox -n silence.mp3 trim 0 500 # use -r to adjust the sample rate if necessary
sox silence.mp3 input.mp3 padded_input.mp3
ffmpeg -i in.avi -i padded_input.mp3 -map 0:v -map 1:a out.avi
