Turning a sox command-line duration into a Node.js variable

I would like to find the duration of an audio .wav file and use that duration as a variable in a Node.js script.
I can get the duration of a wav on the command line using sox:
$ soxi -D audio.wav
which returns:
7.870975
How can I run soxi -D from a Node.js file?

OK, I think I solved this using a child_process:
var exec = require("child_process").exec;
exec("soxi -D /path/to/audio.wav", function (err, stdout) {
    if (err) {
        throw err;
    }
    // soxi prints the duration in seconds; convert to milliseconds
    var audioDuration = parseFloat(stdout) * 1000;
    console.log("This is the audio duration: " + audioDuration);
});

Related

ffmpeg nodejs lambda crop issue

I have a Lambda function that crops videos with ffmpeg.
I am installing the layer this way:
#!/bin/bash
mkdir -p layer
cd layer
rm -rf *
curl -O https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz
tar -xf ffmpeg-git-amd64-static.tar.xz
mv ffmpeg-git-*-amd64-static ffmpeg
rm ffmpeg-git-amd64-static.tar.xz
I am not sure exactly which version it is, but it should be recent, as I last ran this today.
My Node.js Lambda function then uses the following module: https://github.com/fluent-ffmpeg/node-fluent-ffmpeg
return new Promise((resolve, reject) => {
    ffmpeg(inputFile.name)
        .videoFilters(`crop=${width}:${height}:${x}:${y}`)
        .format('mp4')
        .on('error', reject)
        .on('end', () => resolve(fs.readFileSync(outputFile.name)))
        .save(outputFile.name);
});
with, for example, videoFilters('crop=500:500:20:20')
And I get the following error:
ffmpeg exited with code 1: Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #0:0
Conversion failed!
On my local computer I run the following command on the exact same file:
ffmpeg -i in.mp4 -filter:v "crop=500:500:20:20" out.mp4
My version of ffmpeg is 4.2.2 and this works great.
The issue does not occur with all videos; here is one video that causes it: https://ajouve-util.s3.amazonaws.com/earth.mp4
Here is a slightly modified version of the code that works:
var ffmpeg = require('fluent-ffmpeg');

var reject = function (something) {
    console.error("got error", something);
};

var resolve = function () {
    console.info("Good, job done. Now read output file");
};

var width = 500;
var height = 500;
var x = 20;
var y = 20;

var filter_string = `crop=${width}:${height}:${x}:${y}`;
console.log("filter_string", filter_string);

var onStart = function (commandLine) {
    console.log('Spawned Ffmpeg with command: ' + commandLine);
};

var command = ffmpeg('earth.mp4')
    .videoFilters(filter_string)
    .format('mp4')
    .on('error', reject)
    .on('end', resolve)
    .on('start', onStart)
    .output('earth_cropped.mp4');

console.log("running..");
command.run();
console.log("Done");
With earth.mp4 downloaded from your link in same dir output is:
node main.js
filter_string crop=500:500:20:20
running..
Done
Spawned Ffmpeg with command: ffmpeg -i earth.mp4 -y -filter:v crop=500:500:20:20 -f mp4 earth_cropped.mp4
Good, job done. Now read output file
And we get earth_cropped.mp4 file as expected.
I suspect the error is in the input parameters, so add .on('start', onStart) to your code and check the logs to see the exact command used and validate it ;)

Piping ffmpeg thumbnail output to another program

I'm trying to capture frames from a live video stream (h.264) and pipe the resulting JPG images to a Node JS script, instead of saving these individual frames directly to .jpg files.
As a test, I created the following Node JS script, to simply capture the incoming piped data, then dump it to a file:
// pipe.js - test pipe output
var fs = require('fs');
var data = '';
process.stdin.resume();
process.stdin.setEncoding('utf8');
var filename = process.argv[2];
process.stdin.on('data', (chunk) => {
    console.log('Received data chunk via pipe.');
    data += chunk;
});
process.stdin.on('end', () => {
    console.log('Data ended.');
    fs.writeFile(filename, data, err => {
        if (err) {
            console.log('Error writing file: error #', err);
        }
    });
    console.log('Saved file.');
});
console.log('Started... Filename = ' + filename);
Here's the ffmpeg command I used:
ffmpeg -vcodec h264_mmal -i "rtsp://[stream url]" -vframes 1 -f image2pipe - | node pipe.js test.jpg
This generated the following output, and also produced a 175kB file which contains garbage (unreadable as a jpg file anyway). FYI using ffmpeg to export directly to a jpg file produced files around 25kB in size.
...
Press [q] to stop, [?] for help
[h264_mmal @ 0x130d3f0] Changing output format.
Input stream #0:0 frame changed from size:1280x720 fmt:yuvj420p to size:1280x720 fmt:yuv420p
[swscaler @ 0x1450ca0] deprecated pixel format used, make sure you did set range correctly
Received data chunk via pipe.
Received data chunk via pipe.
frame= 1 fps=0.0 q=7.7 Lsize= 94kB time=00:00:00.40 bitrate=1929.0kbits/s speed=1.18x
video:94kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
Received data chunk via pipe.
Data ended.
Saved file.
You can see that the Node JS script is receiving piped data (per the "Received data chunk via pipe." messages above). However, it doesn't seem to be outputting a valid JPG file. I can't find a way to specifically request that ffmpeg output JPG format, since there is no -vcodec option for JPG. I tried using -vcodec png and outputting to a .png file, but the resulting file was about 2MB in size and also unreadable as a png file.
Is this a problem caused by using utf8 encoding, or am I doing something else wrong?
Thanks for any advice.
UPDATE: OK I got it to send a single jpg image correctly. The issue was in the way Node JS was capturing the stream data. Here's a working script:
// pipe.js - capture piped binary input and write to file
var fs = require('fs');
var filename = process.argv[2];
console.log("Opening " + filename + " for binary writing...");
var wstream = fs.createWriteStream(filename);
process.stdin.on('readable', () => {
    var chunk;
    while ((chunk = process.stdin.read()) !== null) {
        wstream.write(chunk); // Write the binary data to file
        console.log("Writing chunk to file...");
    }
});
process.stdin.on('end', () => {
    // Close the file
    wstream.end();
});
However, now the problem is this: when piping the output of ffmpeg to this script, how can I tell when one JPG file ends and another one begins?
ffmpeg command:
ffmpeg -vcodec h264_mmal -i "[my rtsp stream]" -r 1 -q:v 2 -f singlejpeg - | node pipe.js test_output.jpg
The test_output.jpg file continues to grow as long as the script runs. How can I instead know when the data for one jpg is complete and another one has started?
According to this, jpeg files always start with FF D8 FF and end with FF D9, so I guess I can check for this ending signature and start a new file at that point... any other suggestions?
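As a sketch of that marker-based idea: the accumulated buffer can be scanned for SOI (FF D8) and EOI (FF D9) marker pairs. Note this is a heuristic, since FF D9 can also occur inside the entropy-coded data of a frame, so it may split early on some images; for robust framing, one file per frame via the image2 muxer is safer.

```javascript
// Heuristic splitter: extract complete JPEG images from a Buffer by
// scanning for SOI (FF D8) and EOI (FF D9) markers.
function splitJpegs(buffer) {
    var frames = [];
    var start = -1;
    for (var i = 0; i < buffer.length - 1; i++) {
        if (buffer[i] === 0xFF && buffer[i + 1] === 0xD8) {
            if (start === -1) start = i; // SOI: start of an image
        } else if (buffer[i] === 0xFF && buffer[i + 1] === 0xD9 && start !== -1) {
            frames.push(buffer.slice(start, i + 2)); // EOI: end of image
            start = -1;
        }
    }
    return frames; // any trailing partial frame stays in the buffer
}
```

In the 'readable' handler above, you would append each chunk to a running buffer, call splitJpegs on it, write out the complete frames, and keep only the unconsumed tail for the next chunk.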

merge multiple videos into a single .mp4 file using fluent-ffmpeg

Version information
fluent-ffmpeg version: 2.1.2
ffmpeg version: 4
OS: Linux Mint
Code to reproduce
var fluent_ffmpeg = require("fluent-ffmpeg");
var mergedVideo = fluent_ffmpeg();
mergedVideo
    .mergeAdd('./Video1.mp4')
    .mergeAdd('./Video2.mp4')
    // .inputOptions(['-loglevel error', '-hwaccel vdpau'])
    // .outputOptions('-c:v h264_nvenc')
    .on('error', function (err) {
        console.log('Error ' + err.message);
    })
    .on('end', function () {
        console.log('Finished!');
    })
    .mergeToFile('./mergedVideo8.mp4', '/tmp');
When I run this code, I get a conversion failed error.
Observed results
Error ffmpeg exited with code 1: Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #3:0
Conversion failed!
I have tried the same conversion using the command line:
ffmpeg -f concat -i textfile -c copy -fflags +genpts merged8.mp4
where textfile has the following content:
file 'Video1.mp4'
file 'Video2.mp4'
And I was able to concatenate the video file. But I want to get the same result using fluent-ffmpeg.

fluent-ffmpeg thumbnail creation error

I am trying to create a video thumbnail with fluent-ffmpeg. Here is my code:
var ffmpeg = require('fluent-ffmpeg');
exports.thumbnail = function () {
    var proc = new ffmpeg({ source: 'Video/express2.mp4', nolog: true })
        .withSize('150x100')
        .takeScreenshots({ count: 1, timemarks: ['00:00:02.000'] }, 'Video/', function (err, filenames) {
            console.log(filenames);
            console.log('screenshots were saved');
        });
}
but I keep getting this error:
"meta data contains no duration, aborting screenshot creation"
Any idea why?
By the way, I am on Windows. I put the ffmpeg folder in C:/ffmpeg and added ffmpeg/bin to my environment variable. I don't know if fluent-ffmpeg needs to know the path of ffmpeg, but I can successfully create a thumbnail with the code below:
exec("C:/ffmpeg/bin/ffmpeg -i Video/" + Name + " -ss 00:01:00.00 -r 1 -an -vframes 1 -s 300x200 -f mjpeg Video/" + Name + ".jpg")
please help me!!!
I think the issue can be caused by the .withSize('...') method call.
The doc says:
It doesn't interact well with filters. In particular, don't use the size() method to resize thumbnails, use the size option instead.
And the size() method is an alias of withSize().
Also - but this is not the problem in your case - you don't need to set either the count and the timemarks at the same time. The doc says:
count is ignored when timemarks or timestamps is specified.
Then you probably could solve with:
const ffmpeg = require('fluent-ffmpeg');
exports.thumbnail = function(){
const proc = new ffmpeg({ source: 'Video/express2.mp4',nolog: true })
.takeScreenshots({ timemarks: [ '00:00:02.000' ], size: '150x100' }, 'Video/', function(err, filenames) {
console.log(filenames);
console.log('screenshots were saved');
});
}
Have a look at the doc:
https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#screenshotsoptions-dirname-generate-thumbnails
FFmpeg needs to know the duration of a video file; while most videos have this information in the file header, some files don't, mostly raw videos like a raw H.264 stream.
A simple solution could be to remux the video prior to taking the snapshot. The FFmpeg command for this task is quite simple:
ffmpeg -i input.m4v -acodec copy -vcodec copy output.m4v
This command tells FFmpeg to read the input.m4v file, to use the same audio and video streams without re-encoding, and to write the data into the file output.m4v.
FFmpeg automatically adds all extra metadata/header information needed to take the snapshot later.
Try this code to create thumbnails from Video
// You have to install the packages below first
var ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
var ffprobePath = require('@ffprobe-installer/ffprobe').path;
var ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegPath);
ffmpeg.setFfprobePath(ffprobePath);
var proc = ffmpeg(sourceFilePath)
    .on('filenames', function (filenames) {
        console.log('screenshots are ' + filenames.join(', '));
    })
    .on('end', function () {
        console.log('screenshots were saved');
    })
    .on('error', function (err) {
        console.log('an error happened: ' + err.message);
    })
    // take 1 screenshot at a predefined timemark and size
    .takeScreenshots({ count: 1, timemarks: ['00:00:01.000'], size: '200x200' }, "Video/");

how to generate video thumbnail in node.js?

I am building an app with node.js, I successfully uploaded the video, but I need to generate a video thumbnail for it. Currently I use node exec to execute a system command of ffmpeg to make the thumbnail.
exec("C:/ffmpeg/bin/ffmpeg -i Video/" + Name + " -ss 00:01:00.00 -r 1 -an -vframes 1 -f mjpeg Video/" + Name + ".jpg")
This code is coming from a tutorial from http://net.tutsplus.com/tutorials/javascript-ajax/how-to-create-a-resumable-video-uploade-in-node-js/
The code above does generate a jpg file, but it is a full-size video screenshot rather than a thumbnail. Is there another method to generate a video thumbnail, or how can I run the ffmpeg command to make a real (resized) thumbnail? I would also prefer a png file.
Reference to GitHub fluent-ffmpeg project.
Repeating example from original StackOverflow answer:
var proc = new ffmpeg('/path/to/your_movie.avi')
    .takeScreenshots({
        count: 1,
        timemarks: ['600'] // number of seconds
    }, '/path/to/thumbnail/folder', function (err) {
        console.log('screenshots were saved')
    });
Resize by adding a -s widthxheight option to your command.
There is a node module for this:
video-thumb
It basically just wraps a call to exec ffmpeg
I recommend using https://www.npmjs.com/package/fluent-ffmpeg to call ffmpeg from Node.js
Using media-thumbnail, you can easily generate thumbnails from your videos. The module basically wraps the ffmpeg thumbnail functionality.
const mt = require('media-thumbnail')
mt.forVideo(
'./path/to/video.mp4',
'./path/to/thumbnail.png', {
width: 200
})
.then(() => console.log('Success'), err => console.error(err))
You can also create thumbnails from your images using this package.
Instead I would recommend using thumbsupply. In addition to providing you with thumbnails, it caches them to improve performance significantly.
npm install --save thumbsupply
After installing the module, you can use it in the following way:
const thumbsupply = require('thumbsupply')("com.example.application");
thumbsupply.generateThumbnail('some-video.mp4')
    .then(thumb => {
        // serve thumbnail
    })
Another approach, using the gm (GraphicsMagick) module to create a thumbnail inside an Express route:
app.post('/convert', upload.any(), (req, res) => {
    console.log("calling", req.files)
    let thumbNailName = req.files[0].filename.split('.')
    var gm = require('gm');
    gm('./src/Upload/' + req.files[0].filename) // get pdf file from storage folder
        .thumb(
            50, // Width
            50, // Height
            './src/thumbnail/' + thumbNailName[0] + '.png', // Output file name
            80, // Quality from 0 to 100
            function (error, stdout, stderr, command) {
                if (!error) {
                    console.log("processing");
                } else {
                    console.log("error")
                }
            }
        );
})
