I have one process that should receive two stdins:
this.child = child_process.spawn(
  'command',
  ['-', 'pipe:3'],
  { stdio: [null, null, null, /* ??? unknown */] }
);
this.child.stdin.write(data);
this.child.stdio[3].write(anotherData); // this is the unknown part
Is it possible to create two stdins without creating another child process?
Problem explanation: my actual problem is with ffmpeg. I'm spawning an ffmpeg instance with an audio input and a video input, and they both need separate input pipes, e.g. pipe:0 (stdin) for video and pipe:3 (another stdin-like pipe) for audio data, because pipe:1 (stdout) and pipe:2 (stderr) are already taken.
I'm facing an issue with the seeked event in Chrome. The issue seems to be due to how the video being seeked is encoded.
The problem seems to occur most frequently when using ytdl-core and piping a Readable stream into an FFMPEG child process.
let videoStream: Readable = ytdl.downloadFromInfo(info, {
  ...options,
  quality: "highestvideo"
});
With ytdl-core in order to get the highest quality you must combine the audio and video. So here is how I am doing it.
const ytmux = (link, options: any = {}) => {
  const result = new stream.PassThrough({
    highWaterMark: options.highWaterMark || 1024 * 512
  });
  ytdl.getInfo(link, options).then((info: videoInfo) => {
    let audioStream: Readable = ytdl.downloadFromInfo(info, {
      ...options,
      quality: "highestaudio"
    });
    let videoStream: Readable = ytdl.downloadFromInfo(info, {
      ...options,
      quality: "highestvideo"
    });
    // create the ffmpeg process for muxing
    let ffmpegProcess: any = cp.spawn(
      ffmpegPath.path,
      [
        // suppress non-crucial messages
        "-loglevel", "8",
        "-hide_banner",
        // input video and audio by pipe
        "-i", "pipe:3",
        "-i", "pipe:4",
        // re-encode the video
        "-c:v", "libx264",
        "-x264opts", "fast_pskip=0:psy=0:deblock=-3,-3",
        "-preset", "veryslow",
        "-crf", "18",
        // copy the audio (a bare "-c copy" here would also override -c:v)
        "-c:a", "copy",
        "-pix_fmt", "yuv420p",
        "-movflags", "frag_keyframe+empty_moov",
        "-g", "300",
        // output mp4 and pipe
        "-f", "mp4",
        // map video and audio correspondingly
        "-map", "0:v",
        "-map", "1:a",
        "pipe:5"
      ],
      {
        // no popup window for Windows users
        windowsHide: true,
        stdio: [
          // forward stdin, stdout and stderr to the parent,
          "inherit", "inherit", "inherit",
          // and pipe video, audio, output
          "pipe", "pipe", "pipe"
        ]
      }
    );
    audioStream.pipe(ffmpegProcess.stdio[4]);
    videoStream.pipe(ffmpegProcess.stdio[3]);
    ffmpegProcess.stdio[5].pipe(result);
  });
  return result;
};
I am playing around with tons of different arguments. The resulting video gets uploaded to a Google Bucket. Then, when seeking in Chrome, I get issues with certain frames: they cannot be seeked to.
When I pass it through FFMPEG locally, re-encode it, and then upload it, I notice there are no issues.
Here is an image comparing the two results when running ffmpeg -i FILE (the one on the left works fine, and the differences are minor).
I tried adjusting the arguments in the muxer code and am continuing to compare with the re-encoded video. I have no idea why this is happening; it has something to do with the frames.
In Node, when I spawn a child process and then listen to the stdout stream's data events and forward them to process.stdout, the ANSI colors are stripped:
// Will not preserve tty colors
const cp = spawn(procExec, ['--production'])
cp.stdout.on('data', (buf) => {
  // can manipulate buf
  process.stdout.write(buf)
});
cp.stderr.on('data', (buf) => {
  // can manipulate buf
  process.stderr.write(buf)
});

// Also will not preserve tty colors
const cp = spawn(procExec, ['--production'])
cp.stdout.pipe(process.stdout);
cp.stderr.pipe(process.stderr);
Looking through the Node docs, the standard solution is to use one of the possibilities for the stdio option:
const cp = spawn('ls', ['-l'], {
  stdio: 'inherit'
})
or
const cp = spawn('ls', ['-l'], {
  stdio: [0, 1, 2]
})
or
const cp = spawn('ls', ['-l'], {
  stdio: [process.stdin, process.stdout, process.stderr]
})
This WILL preserve ANSI colors in a terminal.
Normally this would be fine, but it means it is impossible to manipulate the output of the stream before it is sent to process.stdout or process.stderr.
a. Why does piping child_process.stdout to process.stdout strip ANSI colors? Is it for the same reason that listening to the data event does?
b. How can I manipulate the output (i.e. change the text) of the stream while keeping the colors at the same time?
I am not an expert on these by any means, but my takeaway from one of the answers in this thread is that Node by default doesn't allocate an actual TTY when using spawn. Why? Because doing so is expensive and varies system by system and shell by shell. Instead, that functionality is maintained as a package for those who need it: https://www.npmjs.com/package/node-pty. This actually makes sense, since it can be large and some users might not even have the rights to get a TTY shell, so non-TTY became the default for spawn. It works for many cases and means less code in Node core.

At the end of the day, it is probably worth just using that package if you need it. I don't know whether you can modify the stream, but if you do, you will need to deal with output that has ANSI color codes in it, which is trickier than standard string parsing.
Sorry for a repeating topic, but I've searched and experimented for two days now and haven't been able to solve the problem.
I am trying to live-stream pictures every second to a client via socket.io-stream using the following code:
var args = [
  "-i", "/dev/video0",
  "-s", "1280x720",
  "-qscale", "1",
  "-vf", "fps=1",
  config.imagePath,
  "-s", config.imageStream.resolution[0],
  "-f", "image2pipe",
  "-qscale", "1",
  "-vf", "fps=1",
  "pipe:1"
];
var camera = spawn("avconv", args); // avconv = ffmpeg
The settings are good, and the process writes to stdout successfully. I capture all outgoing image data using this simplified code:
var ss = require("socket.io-stream");
camera.stdout.on("data", function(data) {
  var stream = ss.createStream();
  ss(socket).emit("img", stream, "newImg");
  // how do i write the data-object to the stream?
  // fs.createReadStream(imagePath).pipe(stream);
});
"socket" comes from the client via the socket.io package; no problem there. What I am doing is listening on the stdout pipe for the "data" event, and that data gets passed to the function above. At this stage "data" is not a stream but a "<Buffer .. .." object, so I cannot stream it the way I previously could with the commented-out createReadStream statement, where I read the image from disk. How do I stream the data (a Buffer at this stage) to the client? Can I do this differently, perhaps without socket.io-stream? "data" is just one part of the whole image, so perhaps two or three "data" objects need to be put together to form the complete image.
I tried using stream.write(data, "binary"), which did transfer the Buffer objects; the problem is that there is no end-of-stream event, so I do not know when an image is complete. I tried registering for stdout's "close", "end", and "finish" events, but nothing triggers. Am I missing something? Am I making it overly complex? The reasoning behind my implementation is that I need a new stream for each complete image; is that right?
Thanks a lot!
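One way to solve the "when is an image complete" problem, sketched under the assumption that the pipe carries JPEG frames (image2pipe with an MJPEG codec): buffer the chunks and split on the JPEG end-of-image marker 0xFF 0xD9, which cannot occur inside the entropy-coded image data. sendFrame below is a hypothetical placeholder for however you ship one finished image to the client:

```javascript
// JPEG end-of-image marker; each complete frame ends with these two bytes.
const EOI = Buffer.from([0xff, 0xd9]);

// Accumulate raw stdout chunks in state.buf and return any complete
// JPEG frames found so far.
function extractFrames(state, chunk) {
  state.buf = Buffer.concat([state.buf, chunk]);
  const frames = [];
  let end;
  while ((end = state.buf.indexOf(EOI)) !== -1) {
    frames.push(state.buf.slice(0, end + 2)); // one complete image
    state.buf = state.buf.slice(end + 2);     // keep the remainder
  }
  return frames;
}

// Wiring it to the camera process would look like:
// const state = { buf: Buffer.alloc(0) };
// camera.stdout.on('data', function(chunk) {
//   extractFrames(state, chunk).forEach(sendFrame);
// });
```

Each returned Buffer is one whole image, so a new socket.io-stream (or a plain binary emit) can be created per frame without waiting for events that never fire.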
In order to convert PCM audio to MP3 I'm using the following:
function spawnFfmpeg() {
  var args = [
    '-f', 's16le',
    '-ar', '48000',
    '-ac', '1',
    '-i', 'pipe:0',
    '-acodec', 'libmp3lame',
    '-f', 'mp3',
    'pipe:1'
  ];
  var ffmpeg = spawn('ffmpeg', args);
  console.log('Spawning ffmpeg ' + args.join(' '));
  ffmpeg.on('exit', function (code) {
    console.log('FFMPEG child process exited with code ' + code);
  });
  ffmpeg.stderr.on('data', function (data) {
    console.log('Incoming data: ' + data);
  });
  return ffmpeg;
}
Then I pipe everything together:
var writeStream = fs.createWriteStream("live.mp3");
var ffmpeg = spawnFfmpeg();
stream.pipe(ffmpeg.stdin);
ffmpeg.stdout.pipe(/* destination */);
The thing is... Now I want to merge (overlay) two streams into one. I already found how to do it with ffmpeg: How to overlay two audio files using ffmpeg
But the ffmpeg command expects two inputs, and so far I'm only able to pipe one input stream into the pipe:0 argument. How do I pipe two streams into the spawned command? Would something like ffmpeg -i pipe:0 -i pipe:0... work? How would I pipe the two incoming streams of PCM data, given that the command expects two inputs?
You could use named pipes for this, but that isn't going to work on all platforms.
I would instead do the mixing in Node.js. Since your audio is in normal PCM samples, that makes this easy. To mix, you simply add them together.
The first thing I would do is convert your PCM samples to a common format... 32-bit float. Next, you'll have to decide how you want to handle cases where both channels are running at the same time and both are carrying loud sounds such that the signal will "clip" by exceeding 1.0 or -1.0. One option is to simply cut each channel's sample value in half before adding them together.
Another option, depending on your desired output, is to let it exceed the normal range and pass it to FFmpeg. FFmpeg can take in 32-bit float samples. There, you can apply proper compression/limiting to bring the signal back under clipping before encoding to MP3.
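The halve-then-add approach can be sketched directly on interleaved s16le buffers. This normalizes each sample to float, mixes, clamps, and converts back; the function name is illustrative, not a library API:

```javascript
// Mix two buffers of interleaved signed 16-bit little-endian PCM samples.
// Each channel is cut in half before summing so the result cannot clip.
function mixPcm16(bufA, bufB) {
  const len = Math.min(bufA.length, bufB.length) & ~1; // whole samples only
  const out = Buffer.alloc(len);
  for (let i = 0; i < len; i += 2) {
    const a = bufA.readInt16LE(i) / 32768; // to float, -1.0 .. 1.0
    const b = bufB.readInt16LE(i) / 32768;
    let mixed = (a + b) / 2;               // halve each, then add
    mixed = Math.max(-1, Math.min(1, mixed)); // safety clamp
    out.writeInt16LE(Math.round(mixed * 32767), i); // back to s16le
  }
  return out;
}
```

The resulting single stream can then be fed to ffmpeg's pipe:0 exactly as in the spawnFfmpeg function above, so only one input pipe is needed.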
I have this code that is meant to output a video containing vid1 and vid2 side by side. I add padding to the right of vid1 and tried to use overlay to put vid2 in that space, but the output video instead shows a duplicate of vid1 on the right. Can someone please tell me what is wrong with my code and how to fix it? Thanks.
ffmpeg("vid1.mp4")
  .input("vid2.mp4")
  .complexFilter([
    "scale=300:300[rescaled]",
    {
      filter: "pad", options: { w: "600", h: "300" },
      inputs: "rescaled", outputs: "padded"
    },
    {
      filter: "overlay", options: { x: "300", y: "0" },
      inputs: ["padded", "vid2.mp4"], outputs: "output"
    }
  ], 'output')
  .output("output.mp4")
  .on("error", function (er) {
    console.log("error occurred: " + er.message);
  })
  .on("end", function () {
    console.log("success");
  })
  .run();
I used the following code in a previous project to do the same thing:
ffmpeg()
  .input("vid1.mp4")
  .input("vid2.mp4")
  .complexFilter([
    '[0:v]scale=300:300[0scaled]',
    '[1:v]scale=300:300[1scaled]',
    '[0scaled]pad=600:300[0padded]',
    '[0padded][1scaled]overlay=shortest=1:x=300[output]'
  ])
  .outputOptions([
    '-map [output]'
  ])
  .output("output.mp4")
  .on("error", function (er) {
    console.log("error occurred: " + er.message);
  })
  .on("end", function () {
    console.log("success");
  })
  .run();
Note that in this case any audio from the videos is disregarded and dropped. If you want audio as well, you will have to add complex mixdown filters that use the [0:a] and [1:a] channels as input.
The -map parameter in the outputOptions list tells ffmpeg to map the stream labeled output into the output.mp4 file. If you need audio, you will have to add another -map parameter to the outputOptions for the audio as well.
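For example, the audio could be carried through with an amix filter entry and a second -map. A hedged sketch, which assumes both input files actually contain an audio stream:

```javascript
// Extra filter entry and output options to keep audio: amix blends the
// two inputs' audio tracks into one stream labeled [aout].
const filters = [
  '[0:v]scale=300:300[0scaled]',
  '[1:v]scale=300:300[1scaled]',
  '[0scaled]pad=600:300[0padded]',
  '[0padded][1scaled]overlay=shortest=1:x=300[output]',
  '[0:a][1:a]amix=inputs=2[aout]' // new: mix both audio tracks
];
const outputOptions = [
  '-map [output]',
  '-map [aout]' // new: also map the mixed audio into output.mp4
];
```

These would be passed to .complexFilter(filters) and .outputOptions(outputOptions) in place of the arrays in the code above.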