Pass in buffer when library expects a file path - node.js

I have a library that expects a filepath in order to load the data. However, I have the contents of the file in the form of a buffer instead. How do I make the buffer pretend to be a filepath?

So you're saying you have a Buffer that holds the path to a file, and you want to convert that Buffer into a string. If that's what you're trying to do, here you go:
var buff = Buffer.from('/path/to/wonder/land.js'); // new Buffer() is deprecated
buff.toString('utf8');
Hope that helps.

Related

Converting a nodejs buffer to string and back to buffer gives a different result in some cases

I created a .docx file.
Now, I do this:
// read the file to a buffer
const data = await fs.promises.readFile('<pathToMy.docx>')
// Converts the buffer to a string using 'utf8' but we could use any encoding
const stringContent = data.toString()
// Converts the string back to a buffer using the same encoding
const newData = Buffer.from(stringContent)
// We expect the values to be equal...
console.log(data.equals(newData)) // -> false
I don't understand in what step of the process the bytes are being changed...
I already spent sooo much time trying to figure this out, without any result... If someone can help me understand what part I'm missing out, it would be really awesome!
A .docx file is not a UTF-8 string (it's a binary ZIP file), so when you read it into a Buffer and then call .toString() on it, you're assuming the buffer already contains UTF-8-encoded text that you now want to move into a JavaScript string. That's not what you have. Your binary data will likely contain byte sequences that are invalid in UTF-8, and those will be discarded or coerced into valid UTF-8, causing an irreversible change.
What Buffer.toString() does is take a Buffer that is ALREADY encoded in UTF-8 and decode it into a JavaScript string. See this note in the docs:
If encoding is 'utf8' and a byte sequence in the input is not valid UTF-8, then each invalid byte is replaced with the replacement character U+FFFD.
So, the code in your question wrongly assumes that Buffer.toString() takes binary data and reversibly encodes it as a UTF-8 string. That is not what it does, and that's why it doesn't do what you expect.
Your question doesn't describe what you're actually trying to accomplish. If you want to do something useful with the .docx file, you probably need to parse it from its binary ZIP form into the actual components of the file in their appropriate format.
Now that you explain you're trying to store it in localStorage, you need to encode the binary into a string format. One popular option is Base64; it isn't especially size-efficient, but it is better than many alternatives. See Binary Data in JSON String. Something better than Base64 for prior discussion on this topic. Ignore the notes about compression in that other answer, because your data is already ZIP-compressed.
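The difference is easy to demonstrate: a UTF-8 round-trip mangles bytes that aren't valid UTF-8, while a base64 round-trip preserves every byte exactly. A minimal sketch:

```javascript
// Bytes that do not form valid UTF-8 (0xFF and 0xFE can never appear in UTF-8).
const data = Buffer.from([0xff, 0xfe, 0x00, 0x01]);

// UTF-8 round-trip: each invalid byte is replaced with U+FFFD, so data is lost.
const utf8Round = Buffer.from(data.toString('utf8'), 'utf8');
console.log(data.equals(utf8Round)); // false

// Base64 round-trip: every byte survives unchanged.
const b64Round = Buffer.from(data.toString('base64'), 'base64');
console.log(data.equals(b64Round)); // true
```

This is why Base64 (or another binary-safe encoding) is the right tool for stuffing binary data into a string store like localStorage.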

adm-zip doesn't compress data

I'm trying to use adm-zip to add files from memory to a zip file also in memory. It seems that the zip file is created correctly (the result of saving zipData can be unzipped in Windows), but the compression ratio is always zero.
This is a model of the code that I expected to work but doesn't. As can be seen from the output, "compressedData" is null and "size" and "compressedSize" are the same whatever value is passed as the file content.
var admzip = require("adm-zip");
var zip = new admzip();
zip.addFile("tmp.txt", Buffer.from("aaaaaaaaaaaaaaaaaaaa")); // addFile expects the content as a Buffer
var zipData = zip.toBuffer();
console.log(zip.getEntries()[0].toString());
https://runkit.com/embed/pn5kaiir12b0
How do I get it to compress the files as well as just zipping?
This is an old question, but for anyone else experiencing this issue: the reason is that adm-zip does not compress the data until the compressedData field is accessed for the first time.
Quote from the docs
[Buffer] Buffer compressedData
When setting compressedData, the LOC Data Header must also be present at the beginning of the Buffer. If the compressedData was set for a ZipEntry and no other change was made to its properties (comment, extra etc.), reading this property will return the same value. If changes had been made, reading this property will recompress the data and recreate the entry headers.
If no compressedData was specified, reading this property will compress the data and create the required headers.
The output of the compressedData Buffer contains the LOC Data Header.

Node.js fs.readFile vs new Buffer binary

I have a situation where I receive a base64 encoded image, decode it, then want to use it in some analysis activity.
I can use Buffer to go from base64 to binary, but I seem to be unable to use that output as expected (as an image).
The solution now is to convert to binary, write it to a file, then read that file again. The fs output can be used as an image, but this approach seems inefficient and adds extra steps, as I would expect the buffer output to also be a usable image since it holds the same data.
My question is: how does the fs.readFile output differ from the Buffer output? And is there a way I can use the buffer output as I would the fs output?
Buffer from a base64 string:
var bin = Buffer.from(base64String, 'base64').toString('binary'); // 'binary' is an alias for the 'latin1' encoding
Read a file
var bin = fs.readFileSync('image.jpg');
Many thanks

Graphicsmagick for Node not writing to the correct file when converting PDF

I'm creating a thumbnail from the first page of a PDF with the Node gm module.
var fs = require('fs');
var gm = require('gm');
var writeStream = fs.createWriteStream("cover.jpg");
// Create JPG from page 0 of the PDF
gm("file.pdf[0]").setFormat("jpg").write(writeStream, function (error) {
    if (!error) {
        console.log("Finished saving JPG");
    }
});
There are two problems with the script:
It creates a file cover.jpg, but that file is empty (size 0) and can't be opened by any viewer.
It creates a file named [object Object] that is an image of the PDF's first page (this is what I want, but the wrong name).
Aside from doing some additional file system manipulation to rename the [object Object] file after generating it, is there something I can change in the way I am using gm and fs in this script to write the image directly to the cover.jpg file?
This question is similar to what I am asking, but there is no accepted working answer and I need to install yet another library to use it (undesirable).
write receives the file path as its first argument, not a write stream, so the method converts the stream object into its string representation; that's why it saves a file named [object Object].
You can just use .write("cover.jpg"), or if you want to use a write stream, you may use .stream().pipe(writeStream).
Take a look at the stream examples of gm.

Node.js Buffer and Encoding

I have an HTTP endpoint where a user uploads a file. I need to read the file contents and then store them in a DB. I can read the file into a Buffer and get a string from it.
The problem is, when the file content is not UTF-8, I see "strange" symbols in the output string.
Is it possible to somehow detect the encoding of the Buffer contents and serialise it to a string correctly?
