Cut multiple parts from a video and merge them together - node.js

I am trying to cut specific parts out of a video and then merge those parts into a single video file using Node.js and ffmpeg.
Here's my code. Currently I can cut only one part out of the video, starting at .setStartTime and lasting .setDuration, and that part is saved.
var ffmpeg = require('fluent-ffmpeg');
var command = ffmpeg()
  .input('./videos/placeholder-video.mp4')
  .setStartTime('00:00:03')
  .setDuration('02')
  .output('./videos/test.mp4')
  .on('start', function(commandLine) {
    console.log('Started: ' + commandLine);
  })
  .on('end', function(err) {
    if (!err) {
      console.log('conversion Done');
    }
  })
  .on('error', function(err) {
    console.log('error: ' + err);
  })
  .run();
How do I cut multiple parts out of the video and merge them into a single video file? I know about the .mergeToFile method, but how do I use it after cutting different parts from my video?
I tried using .setStartTime and .setDuration twice, like below, but the first pair is ignored.
.input('./videos/placeholder-video.mp4')
.setStartTime('00:00:03')
.setDuration('02')
.setStartTime('00:00:15')
.setDuration('05')
.output('./videos/test.mp4')

PLEASE READ UPDATE
I could not test it yet, but try the following:
Set the input video twice, like this:
.input('./videos/placeholder-video.mp4')
.setStartTime('00:00:03')
.setDuration('02')
.input('./videos/placeholder-video.mp4')
.setStartTime('00:00:15')
.setDuration('05')
Then you can merge the multiple inputs with .mergeToFile.
UPDATE
My answer cannot work due to a restriction of .setDuration: it is an output option, so it defines how long transcoding to the output file runs: https://github.com/fluent-ffmpeg/node-fluent-ffmpeg
It is not used to define the length/duration of the input.
Another option would be .loop, but apparently it is not supported for this purpose: https://video.stackexchange.com/questions/12905/repeat-loop-input-video-with-ffmpeg/12906#12906
If you really want to use Node.js, you need multiple commands:
cut the input video into temporary files, one for each cut
merge the temporary files into one output file
Something like this:
var command1 = ffmpeg()
  .input('./small.mp4')
  .seekInput('00:00:02')
  .withDuration(1)
  .output('./temp1.mp4')
  .run();
var command2 = ffmpeg()
  .input('./small.mp4')
  .seekInput('00:00:01')
  .withDuration(3)
  .output('./temp2.mp4')
  .run();
var command3 = ffmpeg()
  .input('./temp1.mp4')
  .input('./temp2.mp4')
  .mergeToFile('./merge.mp4', './temp');
Big problem: the cut commands run asynchronously, so the merge command will fail if the temporary files are not present yet.
So it would be much easier to use a bash or batch script.

This is how I would solve it:
(async function() {
  const ffmpeg = require('fluent-ffmpeg')
  const fs = require('fs') // needed for fs.unlink below

  function ffmpegPromise(fn) {
    return new Promise((res, rej) => {
      fn.on('error', (e) => rej(e))
        .on('end', () => res('finished!'))
        .run()
    })
  }

  const file1 = ffmpeg().input('./videos/placeholder-video.mp4')
    .setStartTime('00:00:03')
    .setDuration(2)
    .output('./videos/test1.mp4')
  const file2 = ffmpeg().input('./videos/placeholder-video.mp4')
    .setStartTime('00:00:10')
    .setDuration(2)
    .output('./videos/test2.mp4')

  const files = await Promise.all([ffmpegPromise(file1), ffmpegPromise(file2)])
  console.log(files) // ['finished!', 'finished!']

  ffmpeg()
    .mergeAdd('./videos/test1.mp4')
    .mergeAdd('./videos/test2.mp4')
    .mergeToFile('./videos/mergedFile.mp4')
    .on('end', () => {
      fs.unlink('./videos/test1.mp4', err => {
        if (err) console.error(err)
      })
      fs.unlink('./videos/test2.mp4', err => {
        if (err) console.error(err)
      })
    })
})()
file1 instantiates ffmpeg. The command does not run until ffmpegPromise calls .run(). It starts at 3 seconds and takes 2 seconds of video, then saves the 2-second clip to './videos/test1.mp4'.
file2 does the same thing, except it starts at 10 seconds. It saves its 2-second clip to './videos/test2.mp4'.
ffmpegPromise wraps the instantiated ffmpeg command in a promise, so it can be awaited.
Promise.all runs both commands at the same time, and resolves to 'finished!' for each once the files are saved.
ffmpeg().mergeAdd()...mergeToFile(...) takes both 2-second clips and merges them into one file, './videos/mergedFile.mp4'. When it completes, the 'end' event fires and fs.unlink deletes the test1 and test2 videos.

Related

fluent-ffmpeg performance - Taking screenshot for a huge amount of file on a network drive

I'm trying to scan all the videos inside a specific folder and create a thumbnail for each of them.
Here is the relevant code:
public async scan(): Promise<string[]> {
  const files = await this.readFile(this._directoryPath);
  const result: string[] = [];
  for await (const file of files) {
    result.push(file);
    try {
      ffmpeg(file)
        .on("error", (err, stdout, stderr) => {
          console.log(err.message);
        })
        .screenshots({
          timestamps: [5],
          filename: basename(file) + ".png",
          folder: this._thumbnailPath,
        });
    } catch (e: any) {
      logger.error("Failed taking screenshot for " + file + " with error " + e.message);
    }
  }
  return result;
}
It runs fine for a small number of videos, but when I tried it on a network path (\\Servername\some_folder) containing 2000 video files, my PC died after a moment. It managed to scan ~800 videos, then everything crashed.
Is there a way to make this a background process? I don't need to wait for it to finish. Or is it possible to run this chunk by chunk from one API call?
I'm completely new to Node.js, so any help is appreciated.
Your problem seems to be that the code tries to process all input files at once, in parallel. As a result, you run into resource allocation issues.
From the usage of await, I assume you are trying to handle this situation.
But the ffmpeg(..).on(...).screenshots(..) call sequence returns immediately, without waiting for the screenshots to be created first. So await is not working the way you want it to.
You need to wait for each video file to finish, then move on to the next file.
A solution would be to place the thumbnail generation inside a promise:
async function processFile(file) {
  return new Promise((resolve, reject) => {
    ffmpeg(file)
      .screenshots(...YOUR_OPTIONS_HERE...)
      .on("error", ...USE_REJECT_HERE.....)
      .on("end", function() {
        resolve(file);
      });
  })
}
And then, instead of doing:
result.push(file)
try {
ffmpeg(file)....
to do something like:
try {
result.push(await processFile(file));
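Putting the pieces together, the loop awaits each file before starting the next, so only one ffmpeg process runs at a time. A runnable sketch with a stubbed processFile standing in for the ffmpeg-backed version above (the stub just resolves with the file name after a short delay):

```javascript
// Stub standing in for the ffmpeg-backed processFile shown above.
function processFile(file) {
  return new Promise((resolve) => setTimeout(() => resolve(file), 10));
}

async function scan(files) {
  const result = [];
  for (const file of files) {
    try {
      // Each file finishes before the next one starts, so only one
      // ffmpeg process would be alive at a time.
      result.push(await processFile(file));
    } catch (e) {
      console.error('Failed taking screenshot for ' + file + ': ' + e.message);
    }
  }
  return result;
}

scan(['a.mp4', 'b.mp4', 'c.mp4']).then((r) => console.log(r));
// prints [ 'a.mp4', 'b.mp4', 'c.mp4' ]
```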

Node Read Streams - How can I limit the number of open files?

I'm running into AggregateError: EMFILE: too many open files while streaming multiple files.
Machine Details:
MacOS Monterey,
MacBook Pro (14-inch, 2021),
Chip Apple M1 Pro,
Memory 16GB,
Node v16.13.0
I've tried increasing the limits, with no luck.
Ideally I would like to be able to set a limit on the number of files open at one time, or to resolve this by closing files as soon as they have been used.
Code below. I've tried to remove the unrelated code and replace it with '//...'.
const MultiStream = require('multistream');
const fs = require('fs-extra'); // Also tried graceful-fs and the standard fs
const { fdir } = require("fdir");
// Also have a require for the bz2 and split2 functions but editing from phone right now
//...
let files = [];
//...
(async () => {
  const crawler = await new fdir()
    .filter((path, isDirectory) => path.endsWith(".bz2"))
    .withFullPaths()
    .crawl("Dir/Sub Dir")
    .withPromise();
  for (const file of crawler) {
    files = [...files, fs.createReadStream(file)]
  }
  multi = await new MultiStream(files)
    // Unzip
    .pipe(bz2())
    // Create chunks from lines
    .pipe(split2())
    .on('data', function (obj) {
      // Code to filter data and extract what I need
      //...
    })
    .on("error", function(error) {
      // Handling parsing errors
      //...
    })
    .on('end', function(error) {
      // Output results
      //...
    })
})();
To prevent pre-opening a filehandle for every single file in your array, you want to open each file on demand, when it's that particular file's turn to be streamed. You can do that with multistream.
Per the multi-stream doc, you can lazily create the readStreams by changing this:
for(const file of crawler){
files = [...files, fs.createReadStream(file)]
}
to this:
let files = crawler.map((f) => {
return function() {
return fs.createReadStream(f);
}
});
After reading over the npm page for multistream, I think I have found something that will help. I have also edited the place where you add the stream to the files array, as I don't see a need to instantiate a new array and spread the existing elements the way you are doing.
To lazily create the streams, wrap them in a function:
var streams = [
  fs.createReadStream(__dirname + '/numbers/1.txt'),
  function () { // will be executed when the stream is active
    return fs.createReadStream(__dirname + '/numbers/2.txt')
  },
  function () { // same
    return fs.createReadStream(__dirname + '/numbers/3.txt')
  }
]
new MultiStream(streams).pipe(process.stdout) // => 123
With that, we can update your logic by simply wrapping the readStreams in functions. This way the streams will not be created until they are needed, which prevents you from having too many open at once. We can do this by updating your file loop:
for (const file of crawler) {
  files.push(function() {
    return fs.createReadStream(file)
  })
}

electron fluent-ffmpeg mergeToFile() command promise not triggering

I am trying to use fluent-ffmpeg with my Electron app to concatenate multiple audio files together with an image into a video. So if I have three files:
song1.mp3 1:00
song2.mp3 0:30
song3.mp3 2:00
front.jpg
I could create output.mp4, which would be 3:30 long and play each file one after the other, in order, with front.jpg set as the background image.
I am trying to create the concatenated audio file for this video first; then I can just render a video with two inputs: the image and the 3:30-long concatenated audio file. But I'm having difficulty getting my Electron app to wait for the ffmpeg job to run and complete.
I know how to do all of these ffmpeg jobs on the command line, but I've been following this guide for how to package ffmpeg into an Electron app that can run on mac/win10/linux environments. I'm developing it on win10 right now.
I have a button:
<button onClick='fullAlbum("upload-${uploadNumber}")'>FULLALBUM</button>
that when I click runs the fullAlbum() function that calls combineMp3FilesOrig to run the actual ffmpeg job:
async function fullAlbum(uploadName) {
  //document.getElementById("buttonId").disabled = true;
  //get table
  var table = $(`#upload_${uploadNumber}_table`).DataTable()
  //get all selected rows
  var selectedRows = table.rows( '.selected' ).data()
  //get outputFile location
  var path = require('path');
  var outputDir = path.dirname(selectedRows[0].audioFilepath)
  //create outputfile
  var timestamp = new Date().getUTCMilliseconds();
  let outputFilepath = `${outputDir}/output-${timestamp}.mp3`
  console.log('fullAlbum() button pressed: ', timestamp)
  await combineMp3FilesOrig(selectedRows, outputFilepath, '320k', timestamp);
  //document.getElementById("buttonId").disabled = false;
  console.log(`fullAlbum() /output-${timestamp}.mp3 should be created now`)
}
function combineMp3FilesOrig(selectedRows, outputFilepath, bitrate, timestamp) {
  console.log(`combineMp3FilesOrig(): ${outputFilepath}`)
  //begin set ffmpeg info
  const ffmpeg = require('fluent-ffmpeg');
  //Get the paths to the packaged versions of the binaries we want to use
  const ffmpegPath = require('ffmpeg-static').replace('app.asar', 'app.asar.unpacked');
  const ffprobePath = require('ffprobe-static').path.replace('app.asar', 'app.asar.unpacked');
  //tell the ffmpeg package where it can find the needed binaries.
  ffmpeg.setFfmpegPath(ffmpegPath);
  ffmpeg.setFfprobePath(ffprobePath);
  //end set ffmpeg info
  //create ffmpeg command
  console.log(`combineMp3FilesOrig(): create command`)
  const command = ffmpeg();
  //set command inputs
  command.input('C:\\Users\\marti\\Documents\\martinradio\\uploads\\CharlyBoyUTurn\\5. Akula (Club Mix).flac')
  command.input('C:\\Users\\marti\\Documents\\martinradio\\uploads\\CharlyBoyUTurn\\4. Civilian Barracks.flac')
  return new Promise((resolve, reject) => {
    console.log(`combineMp3FilesOrig(): command status logging`)
    command.on('progress', function(progress) {
      console.info(`Processing : ${progress.percent} % done`);
    })
    .on('codecData', function(data) {
      console.log('codecData=', data);
    })
    .on('end', function() {
      console.log('file has been converted successfully; resolve() promise');
      resolve();
    })
    .on('error', function(err) {
      console.log('an error happened: ' + err.message, ', reject()');
      reject(err);
    })
    console.log(`combineMp3FilesOrig(): add audio bitrate to command`)
    command.audioBitrate(bitrate)
    console.log(`combineMp3FilesOrig(): tell command to merge inputs to single file`)
    command.mergeToFile(outputFilepath);
    console.log(`combineMp3FilesOrig(): end of promise`)
  });
  console.log(`combineMp3FilesOrig(): end of function`)
}
When I click my button once, my console.logs show that the promise is entered and the command is created, but the function just ends without waiting for a resolve().
Waiting a couple of minutes doesn't change anything.
If I press the button again:
A new command gets created and reaches the end of the promise, but this time it actually starts, and it triggers the previous command to start as well. Both jobs then run and their files are rendered at the correct length (12:08) and the correct quality (320k).
Is there something with my promise I need to fix, involving async functions and promises in an Electron app? I tried editing my ffmpeg command to include
command.run()
at the end of my promise to ensure it gets triggered, but that leads to a console error saying Uncaught (in promise) Error: No output specified, because in fluent-ffmpeg command.mergeToFile(outputFilepath) apparently isn't good enough and I need to include .output(outputFilepath) as well. If I change command.run() to command.output(outputFilepath).run(), then when I click my button the ffmpeg job gets triggered and rendered perfectly fine, EXCEPT THAT THE FILE IS ALWAYS 128kbps.
So I'm trying to figure out why, in my included code block, my ffmpeg command doesn't run the first time it is created.
I've played around with this and I'm seeing the same issue with your original code: the file is being output with a 128k bitrate.
The code below seems to work, though the max bitrate I'm getting is 320k (I presume this is a limitation of the codec).
After testing again I think I'm getting the same behaviour as you, in that the file takes some time to generate. I return a Promise from the combineMp3FilesOrig function, then await it in the click handler and disable the button until the call is complete; obviously your button id will be different.
function combineMp3FilesOrig(selectedRows, outputFile, bitrate) {
  const command = ffmpeg();
  var count = selectedRows.length;
  for (var i = 0; i < count; i++) {
    command.input(selectedRows[i].audioFilepath)
  }
  return new Promise((resolve, reject) => {
    command.on('progress', function(progress) {
      console.info(`Processing : ${progress.percent} % done`);
    })
    .on('codecData', function(data) {
      console.log('codecData=', data);
    })
    .on('end', function() {
      console.log('file has been converted successfully');
      resolve();
    })
    .on('error', function(err) {
      console.log('an error happened: ' + err.message);
      reject(err);
    }).audioBitrate(bitrate).mergeToFile(outputFile);
  });
}
async function convertFiles() {
  document.getElementById("buttonId").disabled = true;
  await combineMp3FilesOrig(selectedRows, 'output.mp3', '320k');
  document.getElementById("buttonId").disabled = false;
}

Fluent-ffmpeg: Output with label 'screen0' does not exist in any defined filter graph, or was already used elsewhere

I'm trying to take video frames one by one using fluent-ffmpeg, to create video masking and a kind of experimental video editor. But when I do that with screenshots it says ffmpeg exited with code 1: Output with label 'screen0' does not exist in any defined filter graph, or was already used elsewhere.
Here is an example of the array I use for producing timestamps: ["0.019528","0.05226","0.102188","0.13635","0.152138","0.186013","0.236149" ...]
// read the json file that contains the timestamps array
fs.readFile(`${config.videoProc}/timestamp/1.json`, 'utf8', async (err, data) => {
  if (err) throw new Error(err.message);
  const timestamp = JSON.parse(data);
  // screenshot happens here
  // loop till there is nothing on the array...
  function takeFrame() {
    command.input(`${config.publicPath}/static/video/1.mp4`)
      .on('error', error => console.log(error.message))
      .on('end', () => {
        if (timestamp.length > 0) {
          // screenshot again ...
          takeFrame();
        } else {
          console.log('Process ended');
        }
      })
      .noAudio()
      .screenshots({
        timestamps: timestamp.splice(0, 100),
        filename: '%i.png',
        folder: '../video/img',
        size: '320x240',
      });
  }
  // call the function
  takeFrame();
});
My expected result is that I can generate all 600 screenshots from one video. But the actual result is this error, ffmpeg exited with code 1: Output with label 'screen0' does not exist in any defined filter graph, or was already used elsewhere, and only 100 screenshots are generated.
[UPDATE]
Using -filter_complex as mentioned here doesn't work:
ffmpeg exited with code 1: Error initializing complex filters.
Invalid argument
[UPDATE]
Command-line arguments:
ffmpeg -ss 0.019528 -i D:\Latihan\video-cms-core\public/static/video/1.mp4 -y -filter_complex scale=w=320:h=240[size0];[size0]split=1[screen0] -an -vframes 1 -map [screen0] ..\video\img\1.png
Turns out that ffmpeg().input(path) and ffmpeg(path) behave differently, which makes duplicate input frames: the first form keeps the old input frames and adds to them (the command object is reused across calls), while the second doesn't. So the second form works perfectly.
Thanks.
Working code :
function takeFrame() {
  // notice that I instantiate a new ffmpeg here, instead of reusing const command = new ffmpeg()
  const screenshots = new ffmpeg(`${config.publicPath}/static/video/1.mp4`)
    .on('error', error => console.log(error.message))
    .on('end', () => {
      if (timestamp.length > 0) {
        // screenshot again ...
        takeFrame();
      } else {
        console.log('Process ended');
      }
    })
    .noAudio()
    .screenshots({
      timestamps: timestamp.splice(0, 100),
      filename: '%i.png',
      folder: '../video/img',
      size: '320x240',
    });
}
// call the function
takeFrame();
});
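The accumulation behaviour described above can be illustrated without ffmpeg at all. Here Command is a hypothetical stand-in for a builder object that keeps every input ever added, which is effectively what reusing one fluent-ffmpeg command object across takeFrame() calls does:

```javascript
// Hypothetical stand-in for a command builder that accumulates inputs.
class Command {
  constructor(input) {
    this.inputs = input ? [input] : [];
  }
  input(path) {
    this.inputs.push(path);
    return this;
  }
}

// Reusing one instance across calls: inputs pile up.
const reused = new Command();
reused.input('1.mp4');
reused.input('1.mp4'); // second call: now TWO inputs
console.log(reused.inputs.length); // prints 2

// A fresh instance per call: exactly one input each time.
const fresh = new Command('1.mp4');
console.log(fresh.inputs.length); // prints 1
```

With two inputs on one command, ffmpeg's generated filter graph no longer matches the single expected 'screen0' output, hence the error.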

fluent-ffmpeg get codec data without specifying output

I am using the fluent-ffmpeg node module to get codec data from a file.
It works if I give an output, but I was wondering whether there is any option to run fluent-ffmpeg without giving it an output.
This is what I am doing:
readStream.end(new Buffer(file.buffer));
var process = new ffmpeg(readStream);
process.on('start', function() {
  console.log('Spawned ffmpeg');
}).on('codecData', function(data) {
  //get recording duration
  const duration = data.duration;
  console.log(duration)
}).save('temp.flac');
As you can see, I am saving the file to temp.flac so I can get the duration in seconds of that file.
If you don't want to save the result of the ffmpeg process to a file, one thing that comes to mind is to redirect the command's output to /dev/null.
In fact, as the owner of the fluent-ffmpeg repository said in a comment, there is no need to specify a real file name for the destination when using the null format.
So, for example, something like that will work:
let process = new ffmpeg(readStream);
process
  .addOption('-f', 'null') // set format to null
  .on('start', function() {
    console.log('Spawned ffmpeg');
  })
  .on('codecData', function(data) {
    //get recording duration
    let duration = data.duration;
    console.log(duration)
  })
  .output('nowhere') // or '/dev/null' or something else
  .run()
It remains a bit hacky, but we must set an output to avoid the "No output specified" error.
When no stream argument is present, the pipe() method returns a PassThrough stream, which you can pipe somewhere else (or just listen to events on).
var command = ffmpeg('/path/to/file.avi')
  .videoCodec('libx264')
  .audioCodec('libmp3lame')
  .size('320x240')
  .on('error', function(err) {
    console.log('An error occurred: ' + err.message);
  })
  .on('end', function() {
    console.log('Processing finished !');
  });
var ffstream = command.pipe();
ffstream.on('data', function(chunk) {
  console.log('ffmpeg just wrote ' + chunk.length + ' bytes');
});
