And conversely, how can I convert the binary data back into an image? The image data saved in the backend is stored as binary.
Try this:
var fs = require("fs");

fs.readFile('image.jpg', function (err, data) {
  if (err) throw err;
  // Encode the raw bytes to a base64 string
  // (new Buffer(...) is deprecated; Buffer.from is the modern API)
  var encodedImage = Buffer.from(data).toString('base64');
  // Decode the base64 string back into a Buffer of raw bytes
  var decodedImage = Buffer.from(encodedImage, 'base64');
});
Hope it will be useful for you.
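To go the other way, note that the decoded result is just a Buffer of raw bytes, so turning it back into an image is a plain file write. A minimal sketch, assuming encodedImage holds the base64 string retrieved from the backend (the string and file name here are placeholders):

var fs = require("fs");

// Placeholder: the base64 string previously stored in the backend
var encodedImage = '/9j/4AAQSkZJRg...';
// Decode base64 back into raw bytes
var imageBuffer = Buffer.from(encodedImage, 'base64');

// Writing the bytes to disk recreates the image file
fs.writeFile('restored.jpg', imageBuffer, function (err) {
  if (err) throw err;
});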
You can also do it using fs.createReadStream instead of reading the whole file at once; note that it is the new Buffer() constructor that is deprecated (use Buffer.from() instead), not Buffer itself.
Find more info about the differences in https://medium.com/tensult/stream-and-buffer-concepts-in-node-js-87d565e151a0
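For example, a minimal sketch of the streaming equivalent (the file name is a placeholder); the chunks are collected and encoded once at the end:

var fs = require("fs");

var chunks = [];
fs.createReadStream('image.jpg')
  .on('data', function (chunk) { chunks.push(chunk); })
  .on('error', function (err) { throw err; })
  .on('end', function () {
    // Join the streamed chunks and base64-encode the result
    var encodedImage = Buffer.concat(chunks).toString('base64');
    console.log(encodedImage.slice(0, 20) + '...');
  });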
If you want a solution for reading files (images included) and converting them to binary, I wrote a small piece of NodeJS code; have a look, I hope it helps you out. It reads a file into a binary string, but you can of course convert that string to an array or byte array. If you get stuck, please let me know in the comments below.
Here is a simple yet robust snippet you can try.
Params format:
getBinary({
  path: '<file_relative_path>',
  padlength: '<prepending_padding_length>', (Default: 4)
  debug: false, (Default: true)
  limit: 10, (Default: Full_File_Length)
  putSpacing: Boolean (Default: false)
})
Params Description:
1. path: Specifies the relative path of the file to be read.
2. padlength: After reading the file, each hex digit is rendered as a
   binary number without leading zeros (e.g. hex f gives 1111 but hex 0
   gives just 0), so if you need a uniform-length binary string, the
   output has to be zero-padded: with padlength 4, hex 0 becomes 0000.
3. limit: Limits how much of the read buffer is rendered.
4. putSpacing: If true, a space is inserted after every padlength characters.
or
getBinary('<file_relative_path>');
Get it here: https://computopedia.com/how-to-convert-image-to-binary-nodejs/
Gist: https://gist.github.com/shankha96/cffe620776066078289ea1f8b15956e0
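For reference, here is a minimal sketch of what such a utility might look like; this is a hypothetical re-implementation of the parameters described above (the debug flag is omitted), not the actual code behind those links:

const fs = require('fs');

function getBinary(opts) {
  // Also accept the plain-string form: getBinary('<file_relative_path>')
  if (typeof opts === 'string') opts = { path: opts };
  const { path, padlength = 4, limit, putSpacing = false } = opts;
  // Read the file and view it as a string of hex digits
  const hex = fs.readFileSync(path).toString('hex');
  const end = limit === undefined ? hex.length : Math.min(limit, hex.length);
  const parts = [];
  for (let i = 0; i < end; i++) {
    // Each hex digit becomes a binary string, zero-padded to padlength
    parts.push(parseInt(hex[i], 16).toString(2).padStart(padlength, '0'));
  }
  return parts.join(putSpacing ? ' ' : '');
}

console.log(getBinary({ path: 'image.jpg', limit: 10, putSpacing: true }));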
I need a way to use Node.js to convert a photo from HEIC format to either jpg or png. I have searched and cannot seem to find anything that works.
npm i heic-convert
const fs = require('fs');
const { promisify } = require('util');
const convert = require('heic-convert');

async function heicToJpg(file, output) {
  const inputBuffer = await promisify(fs.readFile)(file);
  // convert() returns a promise, so it has to be awaited
  const outputBuffer = await convert({
    buffer: inputBuffer, // the HEIC file buffer
    format: 'JPEG',      // output format ('PNG' also works)
  });
  return promisify(fs.writeFile)(output, outputBuffer);
}
With heic-convert as Bruno suggested, it works fine.
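For example, a quick usage sketch (the file names are placeholders):

heicToJpg('photo.heic', 'photo.jpg')
  .then(() => console.log('converted'))
  .catch(console.error);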
Here is a node utility that allows you to serially convert HEIC files present in a folder: convert-heic-files
Changing the filename is sufficient for viewing HEIC as jpg:
const fileName = photo.fileName.split(".")[0] + ".jpg";
I have a very large, binary file (>25 GB), and I need to very quickly read a small range of bytes from it at a specific offset. How can I accomplish this in Node.js in an efficient way?
A fairly minimal example of what you want; refer to https://nodejs.org/api/all.html#fs_fs_createreadstream_path_options for more details:
const fs = require("fs");
const stream = fs.createReadStream("test.txt", { start: 1, end: 5 });
stream.on("data", chunk => console.log(chunk.toString()));
Provided you have a file called test.txt of course...
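For very large files where you want random access at a known offset, opening the file and reading directly into a buffer avoids the stream overhead. A minimal sketch using the promise-based fs API (the file name, offset, and length are placeholders):

const fs = require('fs/promises');

async function readRange(path, position, length) {
  const file = await fs.open(path, 'r');
  try {
    const buffer = Buffer.alloc(length);
    // Read `length` bytes starting at byte offset `position`
    const { bytesRead } = await file.read(buffer, 0, length, position);
    return buffer.subarray(0, bytesRead);
  } finally {
    await file.close();
  }
}

readRange('huge.bin', 1024, 16).then(chunk => console.log(chunk));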
Is there a way to tell require that if file name ends with .jpg then it should return base64 encoded version of it?
var image = require('./logo.jpg');
console.log(image); // data:image/jpg;base64,/9j/4AAQSkZJRgABAgA...
I worry about the "why", but here is "how":
var Module = require('module');
var fs = require('fs');

// Register a loader for the .jpg extension; this relies on Node's
// internal Module._extensions mechanism, which is undocumented
Module._extensions['.jpg'] = function(module, fn) {
  var base64 = fs.readFileSync(fn).toString('base64');
  // Compile a tiny module whose export is the image as a data URI
  module._compile('module.exports="data:image/jpg;base64,' + base64 + '"', fn);
};

var image = require('./logo.jpg');
There are some serious issues with this mechanism. For one, the data for each image you load this way is kept in memory until your app stops, so it is not useful for loading lots of images. And because of the module cache (which also applies to regular use of require()), you can only load an image once: requiring it a second time, after its file has changed, will still yield the first, cached, version, unless you manually clean the module cache.
In other words: you don't really want this.
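If all you need is the data URI, a plain helper avoids the require machinery and its cache entirely. A minimal sketch (the helper name is made up):

var fs = require('fs');

function loadImageAsDataUri(path) {
  // Re-reads the file on every call, so changes on disk are picked up
  var base64 = fs.readFileSync(path).toString('base64');
  return 'data:image/jpg;base64,' + base64;
}

var image = loadImageAsDataUri('./logo.jpg');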
You can use fs.createReadStream("/path/to/file")
I'm using a Latin1-encoded DB and can't change it to UTF-8, which means I run into issues with certain application data. I'm using Tesseract to OCR a document (Tesseract encodes in UTF-8) and tried to use iconv-lite; however, it creates a buffer, and converting that buffer into a string does not give me "latin1" encoding.
I've read a bunch of questions/answers; however, all I get is setting client encoding and stuff like that.
Any ideas?
Since Node.js v7.1.0, you can use the transcode function from the buffer module:
https://nodejs.org/api/buffer.html#buffer_buffer_transcode_source_fromenc_toenc
For example:
const buffer = require('buffer');
const latin1Buffer = buffer.transcode(Buffer.from(utf8String), "utf8", "latin1");
const latin1String = latin1Buffer.toString("latin1");
You can create a buffer from the UTF-8 string you have, and then decode that buffer to Latin 1 using iconv-lite, like this:
var buff = Buffer.from(tesseract_string, 'utf8');
var DB_str = iconv.decode(buff, 'ISO-8859-1');
I've found a way to convert a text file in any encoding to UTF-8:
var fs = require('fs'),
    charsetDetector = require('node-icu-charset-detector'),
    iconvlite = require('iconv-lite');
/* Text files in a git repo may
 * have different encodings, but
 * we always need to serve them
 * as standard 'utf-8'
 */
function getFileContentsInUTF8(file_path) {
  var content = fs.readFileSync(file_path);
  var original_charset = charsetDetector.detectCharset(content);
  var jsString = iconvlite.decode(content, original_charset.toString());
  return jsString;
}
It's also in a gist here: https://gist.github.com/jacargentina/be454c13fa19003cf9f48175e82304d5
Maybe you can try this, where content should be your database buffer data (in latin1 encoding):
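A minimal sketch under that assumption (content here is a placeholder for the Buffer you get from the database):

var iconvlite = require('iconv-lite');

// Placeholder: latin1-encoded bytes as they come from the database
var content = Buffer.from([0xe9, 0xe8]);
// Decode the latin1 bytes into a proper JS string
var text = iconvlite.decode(content, 'latin1');
console.log(text); // éè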
I have a relatively small file (some hundreds of kilobytes) that I want to be in memory for direct access for the entire execution of the code.
I don't know the internals of Node.js exactly, so I'm asking whether an fs open is enough, or whether I have to read the whole file and copy it to a Buffer.
Basically, you need to use the readFile or readFileSync function from the fs module. They return the complete content of the given file, but differ in their behavior (asynchronous versus synchronous).
If blocking Node.js (e.g. on startup of your application) is not an issue, you can go with the synchronous version, which is as easy as:
var fs = require('fs');
var data = fs.readFileSync('/etc/passwd');
If you need to go asynchronous, the code is like that:
var fs = require('fs');
fs.readFile('/etc/passwd', function (err, data) {
  // ...
});
Please note that in either case you can give an options object as the second parameter, e.g. to specify the encoding to use. If you omit the encoding, the raw buffer is returned:
var fs = require('fs');
fs.readFile('/etc/passwd', { encoding: 'utf8' }, function (err, data) {
  // ...
});
Valid encodings are utf8, ascii, utf16le, ucs2, base64 and hex. There is also a binary encoding, but it is deprecated and should not be used any longer. You can find more details on how to deal with encodings and buffers in the appropriate documentation.
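To illustrate: when you omit the encoding you get a Buffer back, which you can decode yourself later (a minimal sketch):

var fs = require('fs');

fs.readFile('/etc/passwd', function (err, data) {
  if (err) throw err;
  console.log(Buffer.isBuffer(data)); // true, since no encoding was given
  console.log(data.toString('utf8')); // decode on demand
});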
As easy as
var buffer = fs.readFileSync(filename);
It's also possible to do this synchronously:
var fs = require('fs');
var path = require('path');
// Buffer mydata
var BUFFER = bufferFile('../public/mydata');
function bufferFile(relPath) {
  return fs.readFileSync(path.join(__dirname, relPath)); // zzzz....
}
fs is the file system module. readFileSync() returns a Buffer, or a string if you ask for one.
fs resolves relative paths against the process's current working directory, which may not be your script's directory; joining with __dirname via the path module works around that.
To load as a string, specify the encoding:
return fs.readFileSync(path.join(__dirname, relPath), { encoding: 'utf8' });