I'm using the following command to record an RTSP stream to a file for a given amount of time (in this example 30 seconds):
ffmpeg -rtsp_transport tcp -i "rtsp://streamurl:554/ch0" -t 30 output.mp4
Sometimes the source stream is closed/interrupted/finished/shut down (whatever you want to call it) before the desired timeout (in this example 30 seconds), and the ffmpeg process finishes with no errors.
I want to know how I can programmatically check whether the above ffmpeg command finished because of the desired timeout (the -t <duration> flag) or because the input stream was interrupted.
In other words, I want to know when a problem occurred with the input stream, given that ffmpeg reports no errors when the input stream is closed or interrupted.
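A minimal sketch of one way to check, in bash: compare the recorded file's duration (as reported by ffprobe) against the requested -t value. The 1-second tolerance and the reliance on bc are my assumptions, not part of the original command:
#!/bin/bash
# Sketch: record, then compare the actual vs. the requested duration.
DURATION=30
ffmpeg -rtsp_transport tcp -i "rtsp://streamurl:554/ch0" -t "$DURATION" output.mp4
# Ask ffprobe for the container duration of the recorded file, in seconds.
ACTUAL=$(ffprobe -v error -show_entries format=duration \
  -of default=noprint_wrappers=1:nokey=1 output.mp4)
# If the file is noticeably shorter than requested, the input likely ended early.
if (( $(echo "$ACTUAL < $DURATION - 1" | bc -l) )); then
  echo "input stream ended early: got ${ACTUAL}s of ${DURATION}s"
fi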
I need to count the number of frames in a video captured by a camera on a per-second basis. I haven't found a solution using ffmpeg or ffprobe (or something else) to output the number of frames per second (maintaining a constant frame rate is not guaranteed because of the capture mechanism and needs to be verified).
So far, I've needed to run ffmpeg and ffprobe separately. First, I run ffmpeg to trim the video:
ffmpeg -ss 00:00:00 -to <desired time in seconds> -i <in_video> -c copy <out_video>
Then, I run ffprobe to count the number of frames in the snippet:
ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=nb_read_frames -print_format csv <out_video>
Is there one command to output the number of frames for each second in the video?
Run
ffmpeg -report -i <in_video> -an -vf select='if(eq(n,0),1,floor(t)-floor(prev_selected_t))' -f null -
In the generated report, search for select:1.000000; that will give you lines of the form
[Parsed_select_0 @ 000001f413152540] n:270.000000 pts:138240.000000 t:9.000000 key:0 interlace_type:P pict_type:P scene:nan -> select:1.000000 select_out:0
The t is the timestamp and the n is the frame index. Check the frame index at each successive t; the difference between them is the frame count for that one-second interval.
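To automate that, a small bash/awk sketch like the one below should work. It assumes ffmpeg's default report file name (ffmpeg-*.log) and at least one frame in every second (otherwise the select value exceeds 1 and the grep misses that line):
# Pull the selected lines out of the report and diff successive frame indexes.
grep 'select:1.000000' ffmpeg-*.log | awk '{
  for (i = 1; i <= NF; i++) {
    if ($i ~ /^n:/) n = substr($i, 3)   # frame index
    if ($i ~ /^t:/) t = substr($i, 3)   # timestamp in seconds
  }
  if (NR > 1) printf "second %d: %d frames\n", prev_t, n - prev_n
  prev_n = n; prev_t = t
}'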
I am trying to crossfade a silent input with a music track to delay the moment when the music starts playing.
I built the command using fluent-ffmpeg so I could choose the duration of the silent input from my program. The duration of the crossfade is calculated from the durations of the two inputs, and equals 0 if one of them is too short.
Below is an example of the resulting command:
ffmpeg -f lavfi -i anullsrc=r=44100 -i music.mp3 -y -filter_complex "[0]atrim=duration=0.28[atrim_0];[atrim_0][1]acrossfade=d=0:c1=tri:c2=tri[final]" -map "[final]" output.mp3
However, this command creates an empty output file when the duration of the silent input is less than 1 second, regardless of which music input follows. The same command with a trim duration greater than 1 second creates a valid output containing the silence followed by the music.
I have tried to look through the FFmpeg debug report but couldn't really see what was wrong.
Below is an excerpt of the debug log report:
Input file #0 (anullsrc=r=44100):
Input stream #0:0 (audio): 14 packets read (28672 bytes); 14 frames decoded (14336 samples);
Total: 14 packets (28672 bytes) demuxed
Input file #1 (music.mp3):
Input stream #1:0 (audio): 504 packets read (210651 bytes); 504 frames decoded (578372 samples);
Total: 504 packets (210651 bytes) demuxed
Output file #0 (output.mp3):
Output stream #0:0 (audio): 0 frames encoded (0 samples); 0 packets muxed (0 bytes);
Total: 0 packets (0 bytes) muxed
Any idea what could cause this?
PS: I am using FFmpeg 4.4; the same command with FFmpeg 4.2 led to a segmentation fault. I don't know if that helps.
acrossfade can accept the crossfade duration through two mutually exclusive options: nb_samples (default: 44100) and duration (default: 0). When the latter isn't set, the former is used. So, in your command, acrossfade uses a crossfade duration of 44100 samples, i.e. 1 second at 44100 Hz. The filter needs both inputs to be at least as long as the crossfade duration.
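For example, if you do want a genuine crossfade, set d explicitly and keep both inputs at least that long. In this sketch the 1.5-second silence and 0.5-second fade are illustrative values, not taken from your command:
ffmpeg -f lavfi -i anullsrc=r=44100 -i music.mp3 -y -filter_complex "[0]atrim=duration=1.5[atrim_0];[atrim_0][1]acrossfade=d=0.5:c1=tri:c2=tri[final]" -map "[final]" output.mp3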
However, in your case, it seems you just want to do two things: fade in the audio and maybe delay it. Just use afade for that.
ffmpeg -i music.mp3 -y -af afade=d=1:curve=tri,adelay=0.28s:all=1 output.mp3
This will fade in the music over one second and delay its start by 0.28 s. (In adelay, a value suffixed with 's' is interpreted as seconds; bare numbers are milliseconds.)
I want to capture the RTSP stream from some IP cameras, and after looking around I found 2 great tools to do this: avconv and openRTSP
openRTSP -u user password rtsp://10.48.34.125/axis-media/media.amp
avconv -i "rtsp://user:password@10.48.34.125/axis-media/media.amp" -vcodec copy -f mp4 10.48.34.125.mp4
but for some voodoo reason, when I need to use URLs without a specific extension, such as:
rtsp://user:password@10.48.34.46/
avconv returns 401 Unauthorized
so I'm stuck with openRTSP at the moment...
The thing is, unlike avconv, openRTSP outputs a raw file that is encoded at 25 fps, which made some of my videos look like they were in fast-forward. I found a (CPU-expensive) way to re-encode the file to a frame rate closer to what I need:
avconv -r 7 -i video-H264-1 -r 24 -f mp4 10.48.34.28.mp4
(In this example I'm forcing the frame rate of the raw file to be 7 and the frame rate of the output file to be 24. I tried using openRTSP's built-in flags, but the output file still had a frame rate of 25: openRTSP -f 7 -u user password rtsp://10.48.34.145/mpeg4/media.3gp)
Sadly the video looks odd at certain points, and that's because the original stream sometimes has a variable frame rate (for example at night).
My question is: is there some way to deactivate this default encoding to 25 fps?
And why 25? I mean, isn't the norm 24?
try:
avconv -rtsp_transport tcp -i rtsp://server -an -vcodec copy -f mp4 10.48.34.28.mp4
If you want to change the original video rate to 24, you must transcode it:
avconv -rtsp_transport tcp -i rtsp://server -an -vcodec libx264 -r 24 -f mp4 10.48.34.28.mp4
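Note that both commands drop the audio with -an; remove that flag if you also want the camera's audio track in the output.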
I'm creating a Node JS application which takes an M-JPEG image stream and constructs an MPEG-1 stream on the fly. I'm leveraging fluent-ffmpeg at the moment. The stream is intended to be continuous and long-lived, and the images flow in freely at a constant framerate.
Unfortunately, using image2pipe with -vcodec mjpeg as the input codec, it seems like ffmpeg needs to wait until all the images are ready before processing begins.
Is there any way to have ffmpeg pipe in and pipe out immediately, as images arrive?
Here is my current Node JS code:
var ffmpeg = require('fluent-ffmpeg'); // winston and outStream are defined elsewhere
var proc = new ffmpeg({ source: 'http://localhost:8082/', logger: winston, timeout: 0 })
.fromFormat('image2pipe')
.addInputOption('-vcodec', 'mjpeg')
.toFormat('mpeg1video')
.withVideoBitrate('800k')
.withFps(24)
.writeToStream(outStream);
And the ffmpeg call it generates:
ffmpeg -f image2pipe -vcodec mjpeg -i - -f mpeg1video -b:v 800k -r 24 -y http://127.0.0.1:8082/
To get a live stream, try switching image2pipe for rawvideo:
.fromFormat('rawvideo')
.addInputOption('-pixel_format', 'argb')
.addInputOption('-video_size', STREAM_WIDTH + 'x' + STREAM_HEIGHT)
This will encode the video with very low latency, starting immediately.
You can remove .fromFormat('image2pipe') and .addInputOption('-vcodec', 'mjpeg').
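The generated command would then look roughly like this (a sketch; the argb pixel format and the 320x240 size stand in for your stream's actual format):
ffmpeg -f rawvideo -pixel_format argb -video_size 320x240 -i - -f mpeg1video -b:v 800k -r 24 -y http://127.0.0.1:8082/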
This is definitely a strange question, but I'm looking for a way to split a 60-minute MP3 mix into 60 separate one-minute WAV files, to use with an audio fingerprinting API like Echonest.
Is this possible in a single ffmpeg command, or would I have to run multiple iterations of ffmpeg with the following values:
-ss is the startpoint in seconds.
-t is the duration in seconds.
You can use the segment muxer in ffmpeg:
ffmpeg -i input.mp3 -codec copy -map 0 -f segment -segment_time 60 output%03d.mp3
For a 4 minute input this results in:
$ ls -m1 output*.mp3
output000.mp3
output001.mp3
output002.mp3
output003.mp3
Since -codec copy enables stream copy mode, re-encoding is avoided. See the segment documentation for more information and examples.
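If you specifically need WAV files for the fingerprinting API, drop the stream copy and re-encode to PCM while segmenting; a sketch (file names are placeholders):
ffmpeg -i input.mp3 -map 0 -c:a pcm_s16le -f segment -segment_time 60 -segment_format wav output%03d.wav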