Tracking granular duration from an ffmpeg stream piped into Node.js

The Situation
I have a video file that I want to process in Node.js, but I want to run it through ffmpeg first in order to normalize the encoding. For a given 'data' event I would like to know how "far" into the video the piped stream has progressed, at as close to frame-level granularity as possible.
0.1 second granularity would be fine.
The Code
I am using Node.js to invoke ffmpeg with the video file path, outputting the data to stdout:
const { spawn } = require('child_process')

const ffmpegSettings = [
  '-i', './path/to/video.mp4', // Ingest a file
  '-f', 'mpegts', // Specify the wrapper
  '-' // Output to stdout
]

const ffmpegProcess = spawn('ffmpeg', ffmpegSettings)
const readableStream = ffmpegProcess.stdout

readableStream.on('data', async (data) => {
  readableStream.pause() // Pause the stream to make sure we process in order.
  // Process the data.
  // This is where I want to know the "timestamp" for the data.
  readableStream.resume() // Restart the stream.
})
The Question
How can I efficiently and accurately keep track of how far into the video a given 'data' event represents, keeping in mind that a single 'data' event in this context might not even contain enough data to represent a full frame?
Temporary Edit
(I will delete this section once a conclusive answer is identified)
Since posting this question I've done some exploration using ffprobe.
start = () => {
  this.ffmpegProcess = spawn('ffmpeg', this.getFfmpegSettings())
  this.ffprobeProcess = spawn('ffprobe', this.getFfprobeSettings())
  this.inputStream = this.getInputStream()

  this.inputStream.on('data', (rawData) => {
    this.inputStream.pause()
    if (rawData) {
      this.ffmpegProcess.stdin.write(rawData)
    }
    this.inputStream.resume()
  })

  this.ffmpegProcess.stdout.on('data', (mpegtsData) => {
    this.ffmpegProcess.stdout.pause()
    this.ffprobeProcess.stdin.write(mpegtsData)
    console.log(this.currentPts)
    this.ffmpegProcess.stdout.resume()
  })

  this.ffprobeProcess.stdout.on('data', (probeData) => {
    this.ffprobeProcess.stdout.pause()
    const lastLine = probeData.toString().trim().split('\n').slice(-1).pop()
    const lastPacket = lastLine.split(',')
    const pts = lastPacket[4] // Field 4 of ffprobe's CSV packet output is pts_time (seconds).
    console.log(`FFPROBE: ${pts}`)
    this.currentPts = pts
    this.ffprobeProcess.stdout.resume()
  })

  logger.info(`Starting ingestion from ${this.constructor.name}...`)
}
/**
 * Returns an ffmpeg settings array for this ingestion engine.
 *
 * @return {String[]} A list of ffmpeg command line parameters.
 */
getFfmpegSettings = () => [
  '-loglevel', 'info',
  '-i', '-',
  '-f', 'mpegts',
  // '-vf', 'fps=fps=30,signalstats,metadata=print:key=lavfi.signalstats.YDIF:file=\'pipe\\:3\'',
  '-',
]

/**
 * Returns an ffprobe settings array for this ingestion engine.
 *
 * @return {String[]} A list of ffprobe command line parameters.
 */
getFfprobeSettings = () => [
  '-f', 'mpegts',
  '-i', '-',
  '-print_format', 'csv',
  '-show_packets',
]
This pipes the ffmpeg output into ffprobe and uses that to "estimate" how far into the stream the processing has got. It starts off wildly inaccurate, but after about 5 seconds of processed video, ffprobe and ffmpeg are producing data at a similar pace.
This is a hack, but a step towards the granularity I want. It may be that I need an mpegts parser / chunker that can run on the ffmpeg output directly in Node.js.
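For reference, here is a rough, untested sketch of what such a parser could look like, wired to the readableStream from the first snippet. It assumes ffmpeg writes aligned 188-byte MPEG-TS packets to the pipe, and it skips PID filtering, PTS wrap-around and resynchronisation; it just pulls the 33-bit PTS (90 kHz clock) out of any PES header it sees.
// Untested sketch: buffer ffmpeg's stdout into 188-byte TS packets and decode
// the PTS from any PES header found. Not a real demuxer.
const TS_PACKET_SIZE = 188

let leftover = Buffer.alloc(0)
let latestSeconds = null

const extractLatestPts = (chunk) => {
  let data = Buffer.concat([leftover, chunk])

  while (data.length >= TS_PACKET_SIZE) {
    const packet = data.slice(0, TS_PACKET_SIZE)
    data = data.slice(TS_PACKET_SIZE)

    if (packet[0] !== 0x47) continue // Lost sync; a real parser would resync here.

    const payloadUnitStart = (packet[1] & 0x40) !== 0
    const adaptation = (packet[3] & 0x30) >> 4
    if (!payloadUnitStart || !(adaptation & 0x1)) continue // No new PES payload in this packet.

    let offset = 4
    if (adaptation & 0x2) offset += 1 + packet[offset] // Skip the adaptation field.
    if (offset + 14 > TS_PACKET_SIZE) continue

    // PES packets start with 0x000001; PSI sections (PAT/PMT) won't match and are skipped.
    if (packet[offset] !== 0x00 || packet[offset + 1] !== 0x00 || packet[offset + 2] !== 0x01) continue
    const ptsDtsFlags = (packet[offset + 7] & 0xc0) >> 6
    if (!(ptsDtsFlags & 0x2)) continue // No PTS in this PES header.

    const p = offset + 9 // First of the five PTS bytes.
    const pts =
      ((packet[p] >> 1) & 0x07) * 1073741824 + // Bits 32..30 (too high for JS bit shifts).
      (packet[p + 1] << 22) +
      ((packet[p + 2] >> 1) << 15) +
      (packet[p + 3] << 7) +
      (packet[p + 4] >> 1)

    latestSeconds = pts / 90000 // PTS runs on a 90 kHz clock.
  }

  leftover = data // Keep any partial packet for the next 'data' event.
  return latestSeconds
}

readableStream.on('data', (data) => {
  console.log(`~${extractLatestPts(data)}s into the video so far`)
})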
The output of the above is something along the lines of:
undefined (repeated around 100x as I assume ffprobe needs more data to start)
FFPROBE: 1.422456 // Note that this represents around 60 ffprobe messages sent at once.
FFPROBE: 1.933867
1.933867 // These lines represent mpegts data sent from ffmpeg, and the latest pts reported by ffprobe
FFPROBE: 2.388989
2.388989
FFPROBE: 2.728578
FFPROBE: 2.989811
FFPROBE: 3.146544
3.146544
FFPROBE: 3.433889
FFPROBE: 3.668989
FFPROBE: 3.802400
FFPROBE: 3.956333
FFPROBE: 4.069333
4.069333
FFPROBE: 4.426544
FFPROBE: 4.609400
FFPROBE: 4.870622
FFPROBE: 5.184089
FFPROBE: 5.337267
5.337267
FFPROBE: 5.915522
FFPROBE: 6.104700
FFPROBE: 6.333478
6.333478
FFPROBE: 6.571833
FFPROBE: 6.705300
6.705300
FFPROBE: 6.738667
6.738667
FFPROBE: 6.777567
FFPROBE: 6.772033
6.772033
FFPROBE: 6.805400
6.805400
FFPROBE: 6.829811
FFPROBE: 6.838767
6.838767
FFPROBE: 6.872133
6.872133
FFPROBE: 6.882056
FFPROBE: 6.905500
6.905500

Related

Reducing latency of Discord.js audio streaming

I'm implementing a Discord.js bot which streams my microphone to a voice channel using Prism Media. The problem is there's a delay of about 3 seconds from when the audio is recorded to when it is played.
The code below is how I'm currently initializing the audio player.
const { createAudioPlayer, createAudioResource, NoSubscriberBehavior, StreamType } = require('@discordjs/voice')
const prism = require('prism-media')

const player = createAudioPlayer({
  behaviors: {
    noSubscriber: NoSubscriberBehavior.Play,
    maxMissedFrames: 250
  }
})

player.play(
  createAudioResource(
    new prism.FFmpeg({
      args: [
        '-analyzeduration', '0',
        '-loglevel', '0',
        '-f', 'dshow',
        '-i', 'audio=Microphone (Realtek(R) Audio)',
        '-acodec', 'libopus',
        '-f', 'opus',
        '-ar', '48000',
        '-ac', '2'
      ]
    }),
    {
      inputType: StreamType.OggOpus
    }
  )
)
Since Prism Media uses FFmpeg to record audio, I started by verifying if FFmpeg by itself already shows this issue. With the command below I am able to reproduce the problem.
ffmpeg -f dshow -i "audio=Microphone (Realtek(R) Audio)" -f opus - | ffplay -
I've also tried various other flags, to no effect, such as:
-audio_buffer_size 50
-fflags nobuffer
-flags low_delay
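For reference, those are input-side options in ffmpeg, so if they were passed through prism-media they would presumably need to sit before '-i' in the args array. A hypothetical sketch of that placement (per the above, the flags made no difference):
// Hypothetical placement only: the latency-related flags go before '-i',
// since they are input/device options.
new prism.FFmpeg({
  args: [
    '-audio_buffer_size', '50',
    '-fflags', 'nobuffer',
    '-flags', 'low_delay',
    '-analyzeduration', '0',
    '-loglevel', '0',
    '-f', 'dshow',
    '-i', 'audio=Microphone (Realtek(R) Audio)',
    '-acodec', 'libopus',
    '-f', 'opus',
    '-ar', '48000',
    '-ac', '2'
  ]
})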
By encoding in other file formats, I'm able to reduce the latency of the raw command, but Discord.js expects the format to be Opus.
Also, I've chosen Prism Media because it is used in the examples I found, but I'm open to changing to another audio library, as long as it is compatible with Electron.
How can I reduce latency to less than a second?

How can I increase my fps output? (ffmpeg, nodejs)

I'm having quite a bit of trouble understanding how fps output works.
I have a video workflow through Node and ffmpeg that transforms pictures into scrolling videos. Here is the command:
const ffmpeg = spawn('ffmpeg', ['-f', 'lavfi', '-i', 'color=s=1280x720', '-loop', '1', '-i', `${path}/${video.name}`, '-filter_complex', `[1:v]scale=1280:-2,format=yuv420p,fps=fps=60[fg]; [0:v][fg]overlay=y=-\'t*h*0.02\'[v]`, '-map', '[v]', '-t', `${clipDuration}`, `./${path}/${video.name}-wip.mp4`])

ffmpeg.stderr.on('data', (data) => {
  console.log(`${data}`);
});

ffmpeg.on('close', (code) => {
  const ffmpeg2 = spawn('ffmpeg', ['-i', `./${path}/${video.name}-wip.mp4`, '-vf', `tpad=stop_mode=clone:stop_duration=3,fade=type=in:duration=1,fade=type=out:duration=1:start_time=${clipDuration + 2}`, `./${path}/${video.name}.mp4`])

  ffmpeg2.stderr.on('data', (data) => {
    console.log(`${data}`);
  });

  ffmpeg2.on('close', (code) => {
    resolve();
  });
})
The first ffmpeg command creates a scrolling video from the picture; the second ffmpeg command adds a fade-out transition and a pause to this video.
The FPS of the output is 25. How can I increase it to 60 so that the scrolling isn't stuttering anymore?
Thanks for your time.
Try this:
const ffmpeg2 = spawn('ffmpeg', ['-i', `./${path}/${video.name}-wip.mp4`, '-vf', `framerate=fps=60,tpad=stop_mode=clone:stop_duration=3,fade=type=in:duration=1,fade=type=out:duration=1:start_time=${clipDuration + 2}`, `./${path}/${video.name}.mp4`])
Note this from https://superuser.com/questions/1265642/ffmpeg-slideshow-with-crossfade:
ffmpeg -i temp.mp4 -vf "framerate=fps=60" -codec:v mpeg4 out.mp4
In your command, use it like this:
ffmpeg -i main.mp4 -vf "framerate=fps=60" -codec:v mpeg4 out.mp4

Pass multiple input files to ffmpeg using a single stream in Node

I'm trying to use ffmpeg to merge multiple video files. Every file has the same encoding, and they just need to be stitched together. The problem I'm having is that I'd like to do this using streams, but ffmpeg only supports one input stream per command.
Since the files have the same encoding, I thought I could merge them into a single stream, and feed it as an input to ffmpeg.
const CombinedStream = require("combined-stream")
const ffmpeg = require("fluent-ffmpeg")
const AWS = require("aws-sdk")

const s3 = new AWS.S3()

const merge = ({ videos }) => {
  const combinedStream = CombinedStream.create();

  videos // I take my videos from S3 and merge them
    .map((video) => {
      return s3
        .getObject({
          Bucket: "myAWSBucketName",
          Key: video
        })
        .createReadStream()
    })
    .forEach((stream) => {
      combinedStream.append(stream)
    })

  ffmpeg()
    .input(combinedStream)
    .save("/tmp/file.mp4")
}

merge({ videos: ["video1.mp4", "video2.mp4"] })
I was hoping ffmpeg could read the files from the single stream and output them together, but I got this error instead:
Error: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!
Can anyone help me?
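For context: two MP4 files byte-concatenated into a single stream don't form a valid container, which is consistent with the error above. A common workaround, sketched here untested and with hypothetical /tmp paths, is to stage the objects to disk first and use ffmpeg's concat demuxer with stream copy (possible here because the inputs share the same encoding):
// Untested sketch: download the S3 objects to local files first, then let the
// concat demuxer stitch them together without re-encoding.
const fs = require("fs")
const ffmpeg = require("fluent-ffmpeg")

const mergeFromDisk = (files, output) => {
  // The concat demuxer reads a text file listing one input per line.
  const listPath = "/tmp/concat.txt"
  fs.writeFileSync(listPath, files.map((f) => `file '${f}'`).join("\n"))

  ffmpeg()
    .input(listPath)
    .inputFormat("concat")
    .inputOptions(["-safe 0"]) // Allow absolute paths in the list file.
    .outputOptions(["-c copy"]) // Same encoding, so stream copy should be enough.
    .save(output)
}

mergeFromDisk(["/tmp/video1.mp4", "/tmp/video2.mp4"], "/tmp/file.mp4")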

FFmpeg Stream RTSP input and save to file at the same time using nodejs

I am using the node-rtsp-stream module to stream RTSP to the web with Node.js.
I am streaming an RTSP source with ffmpeg, for example: RTSP SOURCE - EXAMPLE
I know that I can save one or many inputs to many outputs, but I don't know if there is an option to stream the input and save it to a file at the same time without running two ffmpeg processes.
With the following example I am able to stream the RTSP source
ffmpeg -i rtsp-url -rtsp_transport tcp -f mpeg1video -b:v 800k -r 30
In the module it looks like this:
this.stream = child_process.spawn("ffmpeg", [
  "-i", this.url, "-rtsp_transport", "tcp", '-f', 'mpeg1video', '-b:v', '800k', '-r', '30', '-'
], {
  detached: false
});

ff = child_process.spawn("ffmpeg", [
  "-i", this.url, '-b:v', '800k', '-r', '30', '1.mp4'
], {
  detached: false
});

this.inputStreamStarted = true;

this.stream.stdout.on('data', function (data) {
  return self.emit('mpeg1data', data);
});

this.stream.stderr.on('data', function (data) {
  return self.emit('ffmpegError', data);
});
As you can see, I am using two ffmpeg processes to do what I want. If anyone has faced this issue and solved it with one command (process), I would like to get some suggestions: how to stream an RTSP source and save it to a file at the same time.
For more information about the module I use:
node-rtsp-stream
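For what it's worth, a single ffmpeg process can write several outputs from one input, so the streaming pipe and the recording could in principle share one spawn. A rough, untested sketch based on the settings above (note that -rtsp_transport is an input option and is placed before -i here):
// Untested sketch: one ffmpeg process, two outputs. Output 1 is mpeg1video on
// stdout for node-rtsp-stream; output 2 is the same input recorded to 1.mp4.
this.stream = child_process.spawn("ffmpeg", [
  "-rtsp_transport", "tcp",
  "-i", this.url,
  "-f", "mpeg1video", "-b:v", "800k", "-r", "30", "-",
  "-b:v", "800k", "-r", "30", "1.mp4"
], { detached: false });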
Try this code (it will read the RTSP stream and save it to a jpg file, overwriting it every 3 seconds):
var fs = require('fs');
var spawn = require('child_process').spawn;

var rtspURI = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov';
var fps = 1 / 3;

// avconv -i rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov \
//   -r 1/3 -an -y -update 1 test.jpg
var ffmpeg = spawn('avconv', ['-i', rtspURI, '-r', fps, '-an', '-y', '-update', '1', 'test.jpg']);
// var ffmpeg = spawn('avconv',
//   ['-i', 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov',
//    '-r', '1/3', '-an', '-y', '-update', '1', 'test.jpg']);

ffmpeg.stdout.on('data', function (data) {
  console.log('stdout: ' + data);
});

ffmpeg.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});

ffmpeg.on('close', function (code) {
  console.log('child process exited with code ' + code);
});

fluent-ffmpeg thumbnail creation error

I am trying to create a video thumbnail with fluent-ffmpeg. Here is my code:
var ffmpeg = require('fluent-ffmpeg');

exports.thumbnail = function () {
  var proc = new ffmpeg({ source: 'Video/express2.mp4', nolog: true })
    .withSize('150x100')
    .takeScreenshots({ count: 1, timemarks: [ '00:00:02.000' ] }, 'Video/', function (err, filenames) {
      console.log(filenames);
      console.log('screenshots were saved');
    });
}
but I keep getting this error:
"meta data contains no duration, aborting screenshot creation"
Any idea why? By the way, I am on Windows. I put the ffmpeg folder in C:/ffmpeg and added ffmpeg/bin to my environment variables. I don't know if fluent-ffmpeg needs to know the path of ffmpeg, but I can successfully create a thumbnail with the code below:
exec("C:/ffmpeg/bin/ffmpeg -i Video/" + Name + " -ss 00:01:00.00 -r 1 -an -vframes 1 -s 300x200 -f mjpeg Video/" + Name + ".jpg")
please help me!!!
I think the issue can be caused by the .withSize('...') method call.
The doc says:
It doesn't interact well with filters. In particular, don't use the size() method to resize thumbnails, use the size option instead.
And the size() method is an alias of withSize().
Also - but this is not the problem in your case - you don't need to set both count and timemarks at the same time. The doc says:
count is ignored when timemarks or timestamps is specified.
Then you could probably solve it with:
const ffmpeg = require('fluent-ffmpeg');

exports.thumbnail = function () {
  const proc = new ffmpeg({ source: 'Video/express2.mp4', nolog: true })
    .takeScreenshots({ timemarks: [ '00:00:02.000' ], size: '150x100' }, 'Video/', function (err, filenames) {
      console.log(filenames);
      console.log('screenshots were saved');
    });
}
Have a look at the doc:
https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#screenshotsoptions-dirname-generate-thumbnails
FFmpeg needs to know the duration of a video file. While most videos have this information in the file header, some files don't, mostly raw videos like a raw H.264 stream.
A simple solution could be to remux the video prior to taking the snapshot. The FFmpeg 0.5 command for this task is quite simple:
ffmpeg -i input.m4v -acodec copy -vcodec copy output.m4v
This command tells FFmpeg to read the "input.m4v" file, to copy the audio and video streams as-is (no re-encoding at all), and to write the data into the file output.m4v.
FFmpeg automatically adds all extra metadata/header information needed to take the snapshot later.
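Since the question uses fluent-ffmpeg, that remux step could presumably be expressed with it as well; a rough sketch (untested, using the file names from the command above):
// Untested sketch: remux with stream copy so the output file gains the
// duration/header metadata, then take the screenshot from the new file.
var ffmpeg = require('fluent-ffmpeg');

ffmpeg('input.m4v')
  .audioCodec('copy')
  .videoCodec('copy')
  .on('end', function () {
    // output.m4v should now have a duration; run takeScreenshots on it here.
  })
  .save('output.m4v');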
Try this code to create thumbnails from a video:
// You have to install the packages below first
var ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
var ffprobePath = require('@ffprobe-installer/ffprobe').path;
var ffmpeg = require('fluent-ffmpeg');

ffmpeg.setFfmpegPath(ffmpegPath);
ffmpeg.setFfprobePath(ffprobePath);

var proc = ffmpeg(sourceFilePath)
  .on('filenames', function (filenames) {
    console.log('screenshots are ' + filenames.join(', '));
  })
  .on('end', function () {
    console.log('screenshots were saved');
  })
  .on('error', function (err) {
    console.log('an error happened: ' + err.message);
  })
  // take 1 screenshot at the predefined timemark and size
  .takeScreenshots({ count: 1, timemarks: [ '00:00:01.000' ], size: '200x200' }, "Video/");
