I'm trying to compress an image with pngquant. Here is the code:
const cp = require('child_process');
const fs = require('fs');

let output = '';
const quant = cp.spawn('pngquant', ['256', '--speed', '10'], {
  stdio: [null, null, 'ignore'],
});
quant.stdout.on('data', data => output += data);
quant.on('close', () => {
  fs.writeFileSync('image.png', output);
  fs.writeFileSync('image_original.png', image);
  process.exit(0);
});
quant.stdin.write(image);
image is a Buffer with pure PNG data.
The code runs, however, for some reason, it generates an invalid PNG. Not only that, its size is larger than the original's.
When I execute this from the terminal, I get excellent output file:
pngquant 256 --speed 10 < image_original.png > image.png
I have no idea what's going on; the data in the output file looks pretty PNG-ish.
EDIT: I have managed to make it work:
let output = [];
quant.stdout.on('data', data => output.push(data));
quant.stdin.write(image);
quant.on('close', () => {
  const image = Buffer.concat(output);
  fs.writeFileSync('image.png', image);
});
I assume this is related to how strings are represented in Node.js. I would be happy to get an explanation.
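For anyone else hitting this, my current understanding (happy to be corrected): data events emit Buffers, and output += data implicitly calls toString() with the default UTF-8 encoding. Byte sequences that aren't valid UTF-8 are replaced with U+FFFD, which re-encodes as three bytes, so the result is both corrupted and larger than the input. A minimal sketch of the effect:

// PNG's signature starts with 0x89, which is not valid UTF-8
const chunk = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
const asString = '' + chunk; // implicit chunk.toString('utf8')
console.log(Buffer.from(asString));
// <Buffer ef bf bd 50 4e 47> -- 0x89 became the 3-byte U+FFFD replacement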
When I run a little Node program and pipe its output, then break the pipe, subsequent bash terminal output remains hidden and I'm forced to run reset (which works every time). How should I recover correctly after a broken pipe to avoid going through reset?
The program:
const { unmarshall } = require("@aws-sdk/util-dynamodb");
const fs = require('fs');

(async () => {
  const input = fs.readFileSync(process.argv[2], 'utf-8');
  const records = JSON.parse(input);
  if (records.Items) {
    records.Items = records.Items.map((a) => unmarshall(a));
  }
  process.stdout.on('error', function (err) {
    if (err.code === 'EPIPE') {
      process.exit(0);
    }
  });
  process.stdout.write(JSON.stringify(records, undefined, 2));
})();
When I run the program like this and quit less via the q keystroke, subsequent terminal output is hidden (after the JS program exits and the pipe breaks). Output is restored via reset:
node example.js dynamo_output.json | less
# no terminal output is visible
$ reset
# output is restored
This appears to work:
const fs = require('fs');
const writeStdoutSync = (str) => {
  fs.writeSync(process.stdout.fd, str);
};
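Used in the program above, the final write becomes something like this (a minimal sketch; fs.writeSync throws synchronously on a broken pipe, so EPIPE can be treated as a normal exit condition):

try {
  writeStdoutSync(JSON.stringify(records, undefined, 2));
} catch (err) {
  // a broken pipe is expected when the reader (less) exits early
  if (err.code !== 'EPIPE') throw err;
}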
I want to use the gltf-transform command "merge" in my script, so I wrote something like this to merge two or more glTF files:
const { merge } = require('@gltf-transform/cli');
const fileA = '/model_path/fileA.gltf';
const fileB = '/model_path/fileB.gltf';
merge(fileA, fileB, output.gltf, false);
But nothing happened: no output file, no console log. So I don't know where to continue my little journey.
Would be great if someone has a clue.
An alternative was...
const command = "gltf-transform merge("+fileA+", "+fileB+", output.gltf, false)";
const { exec } = require("child_process");

exec(command, (error, stdout, stderr) => {
  if (error) {
    console.log(`error: ${error.message}`);
    return;
  }
  if (stderr) {
    console.log(`stderr: ${stderr}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
});
... but that didn't work either, and it looks needlessly convoluted.
The @gltf-transform/cli package you're importing isn't really designed for programmatic usage; it's just a CLI. Code meant for programmatic use is in the /functions, /core, and /extensions packages.
If you'd like to implement something similar in JavaScript, I would try this instead:
import { Document, NodeIO } from '@gltf-transform/core';

const io = new NodeIO();
const inputDocument1 = io.read('./model_path/fileA.gltf');
const inputDocument2 = io.read('./model_path/fileB.gltf');

const outputDocument = new Document()
  .merge(inputDocument1)
  .merge(inputDocument2);

// Optional: merge binary resources into a single buffer.
const buffer = outputDocument.getRoot().listBuffers()[0];
outputDocument.getRoot().listAccessors().forEach((a) => a.setBuffer(buffer));
outputDocument.getRoot().listBuffers().forEach((b, index) => (index > 0 ? b.dispose() : null));

io.write('./model_path/output.gltf', outputDocument);
This will create a single glTF file containing two scenes. If you wanted to merge the contents of both files into a single scene, that would require moving some nodes around with the Scene API.
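If you do need that, a rough sketch of the single-scene merge (untested; it assumes the Scene and Node APIs from the core package):

const root = outputDocument.getRoot();
const [sceneA, sceneB] = root.listScenes();
// move every top-level node of the second scene into the first
sceneB.listChildren().forEach((node) => sceneA.addChild(node));
sceneB.dispose();
root.setDefaultScene(sceneA);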
You can use @gltf-transform/cli by spawning it as a child process.
// you need to run `node node_modules/@gltf-transform/cli/bin/cli.js`
const path = require('path');
const { spawn } = require('child_process');

const gltfTransformCliPath = `your project's node_modules dir path${path.sep}@gltf-transform${path.sep}cli${path.sep}bin${path.sep}cli.js`;
const normPath = path.normalize(gltfTransformCliPath);
// pass the executable and its arguments separately: spawn() treats a single
// command string as the name of the executable
const result = spawn('node', [normPath, 'merge', fileA, fileB, 'output.gltf', 'false']);
Plus, if you're going to compress textures with KTX (etc1s, uastc), you have to install KTX on your local machine.
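For example, something like this (the etc1s subcommand name is my recollection of the CLI help; verify it against your installed version):

const compress = spawn('node', [normPath, 'etc1s', 'output.gltf', 'compressed.gltf']);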
I am creating an application with Node.js and I am trying to read a file called "datalog.txt." I use the "append" function to write to the file:
// Appends string data to a given file as 2-byte UTF-16 code units
function append(filename, buffer) {
  let fd = fs.openSync(filename, 'a+');
  // wrap the ArrayBuffer in a typed-array view: fs.writeSync accepts
  // TypedArrays and DataViews, not bare ArrayBuffers
  fs.writeSync(fd, new Uint16Array(str2ab(buffer)));
  fs.closeSync(fd);
}

// Converts a string to an ArrayBuffer of UTF-16 code units
function str2ab(str) {
  var buf = new ArrayBuffer(str.length * 2); // 2 bytes for each char
  var bufView = new Uint16Array(buf);
  for (var i = 0, strLen = str.length; i < strLen; i++) {
    bufView[i] = str.charCodeAt(i);
  }
  return buf;
}
append("datalog.txt","12345");
This seems to work great. However, now I want to use fs.readFileSync to read from the file. I tried using this:
const data = fs.readFileSync('datalog.txt', 'utf16le');
I changed the encoding parameter to all of the encoding types listed in the Node documentation, but all of them resulted in this error:
TypeError: Argument at index 2 is invalid: Invalid encoding
All I want is to be able to read the data from "datalog.txt". Any help would be greatly appreciated!
NOTE: Once I can read the file's data, I want to get a list of all the lines of the file.
The encoding has to be passed inside an options object:
const data = fs.readFileSync('datalog.txt', {encoding:'utf16le'});
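For context (my own reading, not from the original answer): str2ab in the question writes each character as a 2-byte little-endian code unit, i.e. UTF-16LE, which is why this is the right encoding. The list of lines mentioned in the question's NOTE is then one more step:

const data = fs.readFileSync('datalog.txt', { encoding: 'utf16le' });
const dataLines = data.split(/\r?\n/); // one entry per line of datalog.txt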
Okay, after a few hours of troubleshooting and looking at the docs, I figured out a way to do this.
try {
  // get metadata on the file (we need the file size)
  let fileData = fs.statSync("datalog.txt");
  // create an ArrayBuffer to hold the file contents
  let dataBuffer = new ArrayBuffer(fileData.size);
  // read the contents of the file into a view over the ArrayBuffer
  // (fs.readSync expects a TypedArray or DataView, not a bare ArrayBuffer)
  let fd = fs.openSync("datalog.txt", 'r');
  fs.readSync(fd, new Uint8Array(dataBuffer), 0, fileData.size, 0);
  fs.closeSync(fd);
  // interpret the bytes as UTF-16 code units and convert them to a string
  let data = String.fromCharCode.apply(null, new Uint16Array(dataBuffer));
  // split the contents into lines
  let dataLines = data.split(/\r?\n/);
  // print out each line
  dataLines.forEach((line) => {
    console.log(line);
  });
} catch (err) {
  console.error(err);
}
Hope it helps someone else with the same problem!
This works for me:
index.js
const fs = require('fs');
// Write
fs.writeFileSync('./customfile.txt', 'Content_For_Writing');
// Read
const file_content = fs.readFileSync('./customfile.txt', { encoding: 'utf8' }); // already a string; no .toString() needed
console.log(file_content);
node index.js
Output:
Content_For_Writing
Process finished with exit code 0
I need to convert a docx file to PDF, but I don't know Node.js very well. However, I know that the following can be done:
There is a project called unoconv-worker, and in it there is a part where the following code appears:
var child = spawn('unoconv', [
  '--stdout',
  '--no-launch',
  '--format', job.outputExtension,
  job.tempPath
]);
https://github.com/koumoul-dev/unoconv-worker/blob/master/route.js
In my terminal I can convert it in the following way and it works perfectly:
unoconv -f pdf --output="something.pdf" docxtoconvert.docx
However, I would like to pass it a file by its path, so I tried it this way:
const { spawn } = require("child_process");

var filePath = "/tmp/docxtoconvert.docx";
var child = spawn("unoconv", [
  "-f",
  "pdf",
  "--output",
  "/tmp/something.pdf",
  filePath
]);
Output:
Unoconv converter received message on stderr function () {
  if (arguments.length === 0) {
    var result = this.utf8Slice(0, this.length);
  } else {
    var result = slowToString.apply(this, arguments);
  }
  if (result === undefined)
    throw new Error('toString failed');
  return result;
}
But it has not worked. Could you help me? Thank you
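This isn't from the original post, but the quoted output suggests the stderr handler is logging the toString method itself rather than calling it. A minimal sketch of a handler that prints the actual message, assuming the spawn call above:

child.stderr.on("data", (data) => {
  // note the parentheses: passing data.toString uncalled is what
  // prints the function's source code instead of the message
  console.error("unoconv stderr:", data.toString());
});
child.on("close", (code) => console.log(`unoconv exited with code ${code}`));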
Lots of wrapper modules exist for unoconv that can solve your problem.
You can try this
https://www.npmjs.com/package/unoconv
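A rough sketch of what using that package could look like, going by its README (the convert API here is my recollection; check the package docs):

const fs = require("fs");
const unoconv = require("unoconv");

unoconv.convert("/tmp/docxtoconvert.docx", "pdf", (err, result) => {
  if (err) throw err;
  // result is a Buffer containing the converted PDF
  fs.writeFileSync("/tmp/something.pdf", result);
});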
I'm trying to make an extension that uses Chrome native messaging to communicate with youtube-dl via a Node.js host script. I've been able to successfully parse the stdin from the extension and also been able to run a child process (i.e. touch file.dat), but when I try to exec/spawn youtube-dl it hangs on the command. I've tried the host script independently of Chrome native input and it works fine. I think the problem may have something to do with the 1 MB limitation on buffer size in Chrome native messaging. Is there a way around reading the buffer?
#! /usr/bin/env node
"use strict";
const fs = require('fs');
// use the async exec: the original had execSync here, which ignores
// the (err, stdout, stderr) callback below
const exec = require('child_process').exec;

const dlPath = '/home/toughluck/Music';

let first = true;
let buffers = [];

process.stdin.on('readable', () => {
  let chunk = process.stdin.read();
  if (chunk !== null) {
    if (first) {
      chunk = chunk.slice(4); // strip the 4-byte native messaging length header
      first = false;
    }
    buffers.push(chunk);
  }
});

process.stdin.on('end', () => {
  const res = Buffer.concat(buffers);
  const url = JSON.parse(res).url;
  const outTemplate = `${dlPath}/%(title)s.%(ext)s`;
  const cmdOptions = {
    shell: '/bin/bash'
  };
  const cmd = `youtube-dl --extract-audio --audio-format mp3 -o "${outTemplate}" ${url}`;
  // const args = ['--extract-audio', '--audio-format', 'mp3', '-o', outTemplate, url];
  // const cmd2 = 'youtube-dl';
  process.stderr.write('Suck it chrome');
  process.stderr.write('stderr doesnt stop host');
  exec(cmd, cmdOptions, (err, stdout, stderr) => {
    if (err) throw err;
    process.stderr.write(stdout);
    process.stderr.write(stderr);
  });
  process.stderr.write('\n Okay....');
});
The full codebase can be found at https://github.com/wrleskovec/chrome-youtube-mp3-dl
So I was right about what was causing the problem: it had to do with the 1 MB limitation on host-to-Chrome messages. You can avoid it by redirecting the stdout/stderr to a file.
const cmd = `youtube-dl --extract-audio --audio-format mp3 -o \"${outTemplate}\" ${url} &> d.txt`;
This worked for me. To be honest, I'm not entirely sure why the message is considered > 1 MB, and if someone can give a better explanation, that would be great.
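My guess at the mechanism (not confirmed): the host's stdout is the native messaging channel itself, so any child output that leaks onto it gets parsed as message framing, and youtube-dl's progress output alone can exceed 1 MB. As an alternative to the shell redirect, a minimal sketch that detaches the child's output streams entirely:

const { spawnSync } = require('child_process');

// stdout/stderr are discarded instead of leaking into the
// native messaging channel on process.stdout
const args = ['--extract-audio', '--audio-format', 'mp3', '-o', outTemplate, url];
spawnSync('youtube-dl', args, { stdio: ['ignore', 'ignore', 'ignore'] });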