First m3u8 TS segment not working after MP4 to m3u8 conversion in Node.js

index.js
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
const ffmpeg = require('fluent-ffmpeg');
const process = require('process');

const args = process.argv.slice(2);
if (args.length !== 4) {
  console.error('Incorrect number of arguments');
  process.exit(1);
}

const startTime = args[0];
const timeDuration = args[1];
const inputFile = args[2];
const outputFile = args[3];

ffmpeg.setFfmpegPath(ffmpegPath);
ffmpeg(inputFile)
  .setStartTime(startTime)
  .setDuration(timeDuration)
  .output(outputFile)
  .outputOptions('-hls_list_size 0')
  .on('end', function () {
    // 'end' is not passed an error; errors are emitted on the 'error' event
    console.log('Conversion done');
  })
  .on('error', function (err) {
    console.log('error: ', err);
  })
  .run();
Here is index.js, which I run from the terminal with:
node index.js 5 40 ./input.mp4 ./output.m3u8
Here 5 is the start time and 40 is the duration in seconds. The process creates the .m3u8 playlist with .ts segment files, but the first .ts file isn't created properly: it is only a few KB, while all the other files are several MB.
Because output_test0 isn't generated properly, the first few seconds of the .m3u8 play back as a static picture. The issue only affects the first .ts output. Any trick on how to fix it?

Following up on the comment thread: this seems to be caused by using input seeking instead of output seeking.
Use seek() or seekOutput() instead of setStartTime(). The documentation describes the difference:
seek(time): seek output
Aliases: seekOutput().
Seeks streams before encoding them into the output. This is different from calling seekInput() in that the offset will only apply to one output. This is also slower, as skipped frames will still be decoded (but dropped).
The time argument may be a number (in seconds) or a timestamp string (with format [[hh:]mm:]ss[.xxx]).
ffmpeg('/path/to/file.avi')
  .seekInput('1:00')
  .output('from-1m30s.avi')
  .seek(30)
  .output('from-1m40s.avi')
  .seek('0:40');
setStartTime() is an alias for seekInput(). From the same documentation:
seekInput(time): set input start time
Alias: setStartTime().
Seeks an input and only start decoding at given time offset.
Note that seek()/seekOutput() apply to an output rather than to the input like seekInput(), so they must be called after output().
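Applied to the script in the question, this means moving the seek after output(). A minimal sketch of the change (untested; the rest of the script stays the same):
ffmpeg.setFfmpegPath(ffmpegPath);
ffmpeg(inputFile)
  .setDuration(timeDuration)
  .output(outputFile)
  .outputOptions('-hls_list_size 0')
  // output seeking: .seek() replaces .setStartTime() and is called after .output()
  .seek(startTime)
  .on('end', function () {
    console.log('Conversion done');
  })
  .on('error', function (err) {
    console.log('error: ', err);
  })
  .run();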

Related

Use .sfz soundfonts to render audio with WebMScore

I'm using WebMScore to render audio of music scores (it's a fork of MuseScore that runs in the browser or node).
I can successfully load my own local .sf2 or .sf3 files; however, trying to load an .sfz soundfont throws error 15424120 (and error.message is simply 'undefined').
Unlike .sf2 and .sf3, which contain the sounds and instructions in a single file, the .sfz format is just a text instruction file that refers to a separate folder of samples.
The reason I need .sfz is that I have to be able to edit the .sfz file textually and programmatically, without an intervening soundfont generator.
Is there a way to use .sfz files? Do I need to specify Zerberus (the MuseScore .sfz player)? Do I need a different file structure? Please see below.
My environment is node js, with the following test case and file structure:
File Structure
Project Folder
  app.js
  testScore.mscz
  mySFZ.sfz
  samples
    one.wav
    two.wav
    etc.wav
Test Case (works with .sf3, errors with .sfz)
const WebMscore = require('webmscore');
const fs = require('fs');

// free example scores available at https://musescore.com/openscore/scores
const name = 'testScore.mscz';
const exportedPrefix = 'exported';

const filedata = fs.readFileSync(`./${name}`);

WebMscore.ready.then(async () => {
  const score = await WebMscore.load('mscz', filedata, [], false);
  await score.setSoundFont(fs.readFileSync('./mySFZ.sfz'));
  try {
    fs.writeFileSync(`./${exportedPrefix}.mp3`, await score.saveAudio('mp3'));
  } catch (err) {
    console.log(err);
  }
  score.destroy();
});

How to append to a file in Node.js but limit the file to a certain size

I would like to truncate a file by newline \n so that it only grows to some max number of lines. How do I do that with something like fs.appendFileSync?
You can address this problem with Node's readline API:
const fs = require('fs');
const readline = require('readline');

async function processLineByLine() {
  const fileStream = fs.createReadStream('input.txt');
  const rl = readline.createInterface({
    input: fileStream,
    crlfDelay: Infinity
  });

  for await (const line of rl) {
    // count the lines as you go, copying each line to an
    // output stream until the count reaches the maximum
  }

  // if the limit has not been reached yet,
  // you can continue appending to the file
}

processLineByLine();
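To make the idea concrete, here is one way it could look as a small synchronous helper (a sketch: appendWithLimit, MAX_LINES, and the file name are placeholders, and for very large files you would keep the streaming readline approach above instead of readFileSync):
const fs = require('fs');

const MAX_LINES = 1000; // hypothetical cap

function appendWithLimit(file, newLine, maxLines = MAX_LINES) {
  // read the existing lines, if the file exists
  let lines = [];
  if (fs.existsSync(file)) {
    lines = fs.readFileSync(file, 'utf8').split('\n').filter(Boolean);
  }

  lines.push(newLine);

  // keep only the most recent maxLines lines
  if (lines.length > maxLines) {
    lines = lines.slice(lines.length - maxLines);
  }

  fs.writeFileSync(file, lines.join('\n') + '\n');
}

appendWithLimit('input.txt', 'a new log line');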
A second, very similar idea was answered here:
Parsing huge logfiles in Node.js - read in line-by-line

Pass multiple input files to ffmpeg using a single stream in Node

I'm trying to use ffmpeg to merge multiple video files. Every file has the same encoding, and they just need to be stitched together. The problem I'm having is that I'd like to do this using streams, but ffmpeg only supports one input stream per command.
Since the files have the same encoding, I thought I could merge them into a single stream, and feed it as an input to ffmpeg.
const CombinedStream = require("combined-stream")
const ffmpeg = require("fluent-ffmpeg")
const AWS = require("aws-sdk")

const s3 = new AWS.S3()

const merge = ({ videos }) => {
  const combinedStream = CombinedStream.create();

  videos // I take my videos from S3 and merge them
    .map(video => {
      return s3
        .getObject({
          Bucket: "myAWSBucketName",
          Key: video
        })
        .createReadStream()
    })
    .forEach(stream => {
      combinedStream.append(stream)
    })

  ffmpeg()
    .input(combinedStream)
    .save("/tmp/file.mp4")
}

merge({ videos: ["video1.mp4", "video2.mp4"] })
I was hoping ffmpeg could read the files from the single stream and output them together, but I got this error instead:
Error: ffmpeg exited with code 1: pipe:0: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Conversion failed!
Can anyone help me?
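For context (not from this thread): MP4 files cannot be stitched together by simply concatenating their bytes, because each file carries its own container metadata, which is why ffmpeg cannot make sense of the combined stream. A commonly suggested workaround is to stage the inputs as temporary files and let ffmpeg concatenate them, for example with fluent-ffmpeg's mergeToFile() (a sketch; the paths are placeholders):
const ffmpeg = require("fluent-ffmpeg")

// assumes the S3 objects were first downloaded to local temp files
ffmpeg("/tmp/video1.mp4")
  .input("/tmp/video2.mp4")
  .on("end", () => console.log("merge finished"))
  .on("error", err => console.error(err))
  .mergeToFile("/tmp/file.mp4", "/tmp") // second argument: temp dir for intermediate files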

How to solve output underflow error using naudiodon / portaudio?

I am writing a small Node.js program that plays .wav sound files on a chosen audio device.
The sound starts fine, but it stops before the end of the file.
Here is my code:
const fs = require("fs");
const wav = require("wav");
const portAudio = require("naudiodon");

const ao = new portAudio.AudioIO({
  outOptions: {
    channelCount: 2,
    sampleFormat: portAudio.SampleFormat24Bit,
    sampleRate: 44100,
  }
});

const name = "myfile.wav";
const file = fs.createReadStream(`./sounds/${name}`);
const reader = new wav.Reader();

reader.on("format", () => {
  reader.pipe(ao);
  ao.start();
});

file.pipe(reader);
process.on("SIGINT", ao.quit);
When I modify the highWaterMark option of fs.createReadStream, it slightly changes where the sound cuts off, but playback never reaches the end.
I always get a portAudio status - output underflow error logged.
Thanks for any help!
I have been experiencing a similar error, and my solution was to manually write to the AudioIO stream instead of using the pipe commands.
So instead of
reader.on("format", () => {
  reader.pipe(ao);
  ao.start();
});
you would use
ao.start();
reader.on("data", chunk => ao.write(chunk));
Output underflow is generally not an issue, but to avoid it I initialised a new instance of PortAudio before playing each file; that is only practical if you don't mind a little extra latency.
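Putting that back into the question's script might look like this (a sketch; the 'end' handler that closes the device is an assumption, not part of the original answer):
const fs = require("fs");
const wav = require("wav");
const portAudio = require("naudiodon");

const ao = new portAudio.AudioIO({
  outOptions: {
    channelCount: 2,
    sampleFormat: portAudio.SampleFormat24Bit,
    sampleRate: 44100,
  }
});

const reader = new wav.Reader();

ao.start();
reader.on("data", chunk => ao.write(chunk)); // write manually instead of piping
reader.on("end", () => ao.quit());           // assumption: close the device when the file ends

fs.createReadStream("./sounds/myfile.wav").pipe(reader);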

Use archiver in Node.js where the output is a buffer

I want to zip a few readable streams into a writable stream.
The purpose is to do it all in memory and not create an actual zip file on disk.
For that I'm using archiver:
let bufferOutput = Buffer.alloc(5000);
let archive = archiver('zip', {
  zlib: { level: 9 } // sets the compression level
});

archive.pipe(bufferOutput);
archive.append(someReadableStream, { name: 'test.txt' });
archive.finalize();
I get an error on the line archive.pipe(bufferOutput):
"dest.on is not a function"
What am I doing wrong?
Thanks
UPDATE:
I'm running the following code for testing, but the ZIP file is not created properly. What am I missing?
const fs = require('fs'),
  archiver = require('archiver'),
  streamBuffers = require('stream-buffers');

let outputStreamBuffer = new streamBuffers.WritableStreamBuffer({
  initialSize: (1000 * 1024),   // start at 1000 kilobytes
  incrementAmount: (1000 * 1024) // grow by 1000 kilobytes each time the buffer overflows
});

let archive = archiver('zip', {
  zlib: { level: 9 } // sets the compression level
});

archive.pipe(outputStreamBuffer);
archive.append("this is a test", { name: "test.txt" });
archive.finalize();

outputStreamBuffer.end();

fs.writeFile('output.zip', outputStreamBuffer.getContents(), function() {
  console.log('done!');
});
In your updated example, I think you are trying to get the contents before it has been written.
Hook into the finish event and get the contents then.
outputStreamBuffer.on('finish', () => {
  // do something with the contents here
  outputStreamBuffer.getContents();
});
A Buffer is not a stream; you need something like https://www.npmjs.com/package/stream-buffers.
As for why you are seeing garbage: what you are seeing is the zipped data, which does look like garbage.
To verify that the zipping worked, unzip the output again and check that it matches the input.
Adding this event listener on the archiver works for me:
archive.on('finish', function() {
  outputStreamBuffer.end();
  // write your file
});
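Combining the suggestions above, the updated test could look like this (a sketch; it relies on pipe() ending the buffer when the archive finishes, and only reads the contents inside the 'finish' handler):
const fs = require('fs');
const archiver = require('archiver');
const streamBuffers = require('stream-buffers');

const outputStreamBuffer = new streamBuffers.WritableStreamBuffer({
  initialSize: (1000 * 1024),
  incrementAmount: (1000 * 1024)
});

const archive = archiver('zip', {
  zlib: { level: 9 }
});

// pipe() ends the destination when the archive finishes,
// so no manual outputStreamBuffer.end() is needed
archive.pipe(outputStreamBuffer);

// only read the contents once the buffer has emitted 'finish'
outputStreamBuffer.on('finish', () => {
  fs.writeFile('output.zip', outputStreamBuffer.getContents(), function () {
    console.log('done!');
  });
});

archive.append('this is a test', { name: 'test.txt' });
archive.finalize();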
