FFMPEG action on events - linux

I am trying to make FFmpeg act on events.
For example: ffmpeg -i http://domain/index.m3u8 -c copy -f segment -strftime 1 -segment_time 10 %Y-%m-%d-%H-%M-%S.mp4
FFmpeg takes a live stream, cuts it into slices, and creates files. I want to run a script, do_with_file.sh, after every slice is created, without pausing ffmpeg.
Is there any option in ffmpeg to do this?
Of course, I can take the output from ffmpeg and look for the "segment" text:
ffmpeg ....mp4 2>&1 | grep 'segment @' | do_with_file.sh
But first of all, the info line about the segment is printed before the file is fully written.
It also does not work if I want to run ffmpeg in the background.
And to my mind, it is not the geek way :)
P.S. English is not my native language, sorry for mistakes.

You can ask ffmpeg to tell you when a segment is finished recording:
-loglevel verbose
With this option you'll get the event you're looking for:
[segment @ 0x0f0f0f0f0f0f] segment:'filename.ext' count:N ended
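If you go this route, a minimal sketch of wiring that log line to your script might look like the following (assumes GNU grep for the -P flag; note that ffmpeg logs to stderr, hence the 2>&1):
ffmpeg -loglevel verbose -i http://domain/index.m3u8 -c copy -f segment -strftime 1 -segment_time 10 %Y-%m-%d-%H-%M-%S.mp4 2>&1 \
  | grep --line-buffered -oP "segment:'\K[^']+(?=' count:\d+ ended)" \
  | while read -r f; do ./do_with_file.sh "$f"; done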
But if you prefer a "geek" way, you may try inotifywait:
while segment=$(inotifywait --quiet --event close_write --format %w%f path/to/dir); do
    do_with_file "$segment"    # quoted in case the path contains spaces
done
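(inotifywait is part of the inotify-tools package on most distributions; close_write fires only after the writer has closed the file, so the segment is complete by the time your script runs.)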

Related

How to find if videos have sound in them?

I have a few hundred video files in a folder structure. All of them have video and audio streams, but some of them don't have any sound, despite having an audio stream. Is there a way to find those files without having to resort to opening each file individually?
Most ways I know of only check whether there is an audio stream.
Thanks.
You can run the following in batch mode:
ffmpeg -hide_banner -i file.mp4 -af volumedetect -vn -f null - 2>&1 | grep mean_volume
The output for each file will be of the form
[Parsed_volumedetect_0 @ 0000000002b1e800] mean_volume: -17.2 dB
Perfect digital silence will have a value of -91 dB, but anything below, say, -40 dB is probably just tape noise. Test and verify a few inputs manually and pick a threshold.
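To run this over a whole folder tree, a minimal sketch (assuming bash 4+ for globstar; sort the output to spot the quiet files at a glance):
#!/bin/bash
shopt -s globstar                  # enable **/ recursive globbing (bash 4+)
for f in **/*.mp4; do
    # volumedetect reports to stderr; pull out the mean_volume figure
    vol=$(ffmpeg -hide_banner -i "$f" -af volumedetect -vn -f null - 2>&1 \
          | awk '/mean_volume/ { print $(NF-1) }')
    echo "$vol dB  $f"
done | sort -n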

No data written to stdin or stderr from ffmpeg

I have a dummy client that is supposed to simulate a video recorder; on this client I want to simulate a video stream. I have gotten as far as being able to create a video from bitmap images that I create in code.
The dummy client is a Node.js application running on a Raspberry Pi 3 with the latest version of Raspbian Lite.
In order to use the video I have created, I need ffmpeg to dump the video to pipe:1. The problem is that I need -f rawvideo as an input parameter, or else ffmpeg can't understand my video; but when I have that parameter set, ffmpeg refuses to write anything to stdout.
ffmpeg is running with these parameters
ffmpeg -r 15 -f rawvideo -s 3840x2160 -pixel_format rgba -i pipe:0 -r 15 -vcodec h264 pipe:1
Can anybody help with a solution to my problem?
--Edit
Maybe I should explain a bit more.
The system I am creating is set up in such a way that, instead of my stream server asking the video recorder for a video stream, it is the recorder that tells the server that there is a stream.
I have solved my problem on my own. (-:
I now have two solutions:
1. Change my -f rawvideo to -f data; that works for me anyway.
2. Encode my bitmaps as JPEG in code and pipe the JPEG images to stdin. This also requires changing the ffmpeg parameters to -r 4 -f mjpeg -i pipe:0 -r 4 -vcodec copy -f mjpeg pipe:1, and it is by far the slowest thing I have ever done; I can't use a 4K input with it.
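For what it's worth, a likely reason the original command wrote nothing is that ffmpeg cannot infer an output container from the name pipe:1, so piped output needs an explicit -f. A hedged sketch of the original command with that one change (untested here; assumes an H.264 encoder such as libx264 is present in the build):
ffmpeg -r 15 -f rawvideo -s 3840x2160 -pixel_format rgba -i pipe:0 -r 15 -vcodec h264 -f h264 pipe:1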
Thanks @Mulvya for trying to help.
@eFox: thanks for editing my stupid spelling and grammar mistakes.

Set up basic Batch or Node.JS prompts for FFMPEG?

I have some game clips from Nvidia ShadowPlay that I like to casually shorten and/or turn into WebMs, or keep as MP4s. I use the same ffmpeg line for them, changing only the input file, start time, and output file.
How could I set up something like a batch file (I was thinking maybe Node as well) that just asks for the input file, start time, and output file?
The current ffmpeg command line I use is like this:
ffmpeg -i desktop.mp4 -ss 00:01:50 -b 900000 -vf "scale=640:trunc(ow/a/2)*2" output.webm
You can prompt for user input using the following pattern:
SET /P FILENAME=Enter Filename:
ECHO USER ENTERED %FILENAME%
So with your code you'd set up your three variables and then use:
ffmpeg -i "%INFILE%" -ss %STARTTIME% -b 900000 -vf scale=640:trunc(ow/a/2)*2 "%OUTFILE%"

Passing processed Video from OpenCV to FFmpeg for HLS streaming (Raspberry PI)

Hi, I have a question. I have OpenCV and ffmpeg on a Raspberry Pi and I am trying to stream live video from it. At the moment I have the output of OpenCV saved as an .avi file, and I have a command for ffmpeg:
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' stream.m3u8
This command takes that output and creates the playlist (.m3u8) and the segments (.ts).
At present I have OpenCV programmed in C++ (this cannot change). I have built an executable from it, and both the C++ executable and the above ffmpeg command sit in a Bash script:
#!/bin/bash
while true; do
    ./OpenCV
    ffmpeg -i out.avi -hls_segment_filename '%03d.ts' stream.m3u8
done
This does let me stream the processed OpenCV video. My issue is that, because the Bash script runs in a while loop, it keeps resetting the playlist and the .ts files, so I have to constantly press play on the client connection.
Is there any way around this?
I tried including a variable that increments on every loop, but if I replace '%03d' with it I get an error.
If you insist on running your program (OpenCV) and ffmpeg in a loop, then you can specify the initial HLS sequence number for stream.m3u8 using start_number. Something like this:
... as before ...
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' -start_number "$I" stream.m3u8
where I is a variable that you have to increment each time the loop runs.
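Concretely, the loop might look like this (a sketch only; as the next paragraph explains, it leans on the assumption that each ffmpeg run emits exactly one segment):
I=0
while true; do
    ./OpenCV
    ffmpeg -i out.avi -hls_segment_filename '%03d.ts' -start_number "$I" stream.m3u8
    I=$((I + 1))
done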
But this approach is very fragile and will probably result in an incorrect stream, because it assumes that ffmpeg will produce only a single segment per run, while in reality it will probably produce several.
A much better approach is to run OpenCV and ffmpeg in parallel and make them talk to each other. That way there is no need to write to a temporary file out.avi, run OpenCV and ffmpeg in sequence, and keep the media sequence numbers synchronized.
I think you can hack it like this. Note that you may need to change OpenCV so that it writes to out.avi continuously and does not return after a while:
./OpenCV &
tail -n +0 -f out.avi | ffmpeg -i pipe:0 -hls_segment_filename '%03d.ts' stream.m3u8
A better approach is to change your program to write to stdout, or to a named pipe, and run it like so:
./OpenCV | ffmpeg -i pipe:0 -hls_segment_filename '%03d.ts' stream.m3u8
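If the program must write to a file path rather than stdout, a named pipe gives the same effect while keeping the command above. A minimal sketch (one caveat: many AVI writers need a seekable output to finalize their index, so a stream-friendly container may be required):
mkfifo out.avi       # replace the temporary file with a named pipe
./OpenCV &           # the program keeps writing to out.avi, now a FIFO
ffmpeg -i out.avi -hls_segment_filename '%03d.ts' stream.m3u8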

restarting ffmpeg upon stop/disconnection

I'm recording a long audio m3u8 stream with ffmpeg (with -t to limit the time).
The problem is that the stream resets its connection quite often.
How do I make ffmpeg restart upon hangs?
I was thinking of running a hack like this:
timeout <time> bash -c 'while true; do ffmpeg -i <mystream> <outfile.mp3>; done'
but it would keep overwriting the same file.
Any suggestions?
You should be able to concatenate MP3. Tell ffmpeg to write to stdout and append the output to a file:
timeout 60 bash -c 'while true; do ffmpeg -i mystream -f mp3 - >> outfile.mp3; done'
(The -f mp3 is needed because ffmpeg cannot guess the output format when writing to a pipe.)
As it usually happens, a more careful reading of the man page revealed the solution.
I also learned that it's now better to use avconv over ffmpeg for its better support of HLS.
Once I marked the stream as an m3u8 one (actually it's called HLS), it worked:
ffmpeg -i hls+http://<stream url> -t <timeout> <output file.mp3>
Happy converting, everyone!
