NodeJS require('./path/to/image/image.jpg') as base64

Is there a way to tell require that if file name ends with .jpg then it should return base64 encoded version of it?
var image = require('./logo.jpg');
console.log(image); // data:image/jpg;base64,/9j/4AAQSkZJRgABAgA...

I worry about the "why", but here is "how":
var Module = require('module');
var fs = require('fs');
Module._extensions['.jpg'] = function(module, fn) {
var base64 = fs.readFileSync(fn).toString('base64');
module._compile('module.exports="data:image/jpg;base64,' + base64 + '"', fn);
};
var image = require('./logo.jpg');
There are some serious issues with this mechanism. For one, the data for each image you load this way is kept in memory until your app stops, so it's not useful for loading lots of images. And because of that caching mechanism (which also applies to regular use of require()), you can only load an image into the cache once: requiring an image a second time, after its file has changed, will still yield the first, cached, version, unless you manually start cleaning the module cache.
In other words: you don't really want this.

You can use fs.createReadStream("/path/to/file")
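For example, a minimal sketch that streams the image and builds the same data URI without going through require() (the ./logo.jpg path is just an example):
var fs = require('fs');

// Collect the streamed chunks, then base64-encode them once the stream ends
var chunks = [];
fs.createReadStream('./logo.jpg')
  .on('data', function (chunk) { chunks.push(chunk); })
  .on('end', function () {
    var image = 'data:image/jpg;base64,' + Buffer.concat(chunks).toString('base64');
    console.log(image);
  });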

Related

How to convert a node gd image to a stream that I can pipe?

I'm using node-gd to process images, but I'd like to do a few things before saving them to the disk. Right now I save the file with the .savePng() and .saveJpeg() functions.
I'd like to convert it to a stream which can be piped to an FS stream.
I tried the module streamifier because it sounds like it would do what I need, but when running the code below, the exported image is unreadable (though the same size as exporting via node-gd).
Here is what I attempted to do:
var gd = require("node-gd");
var fs = require("fs");
const streamifier = require('streamifier');
var inputImage = gd.createFromPng('input.png');
var writeStream = fs.createWriteStream('output.png');
var pngstream = inputImage.pngPtr();
streamifier.createReadStream(pngstream).pipe(writeStream);
Is there something I'm missing?
The PNG pointer needs to first be converted to a Buffer, like so:
var pngstream = Buffer.from(inputImage.pngPtr(), 'binary');
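Putting that together, the corrected version of the snippet would look roughly like this (same file names as in the question):
var gd = require('node-gd');
var fs = require('fs');
var streamifier = require('streamifier');

var inputImage = gd.createFromPng('input.png');
var writeStream = fs.createWriteStream('output.png');

// Wrap the raw pointer data in a Buffer before handing it to streamifier
var pngBuffer = Buffer.from(inputImage.pngPtr(), 'binary');
streamifier.createReadStream(pngBuffer).pipe(writeStream);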

Writing long strings to file (node js)

I have a string which is 169 million chars long, which I need to write to a file and then read from another process.
I have read about WriteStream and ReadStream, but how do I write the string to a file when it has no method 'pipe'?
Creating a write stream is a good idea. You can use it like this:
var fs = require('fs');
var wstream = fs.createWriteStream('myOutput.txt');
wstream.write('Hello world!\n');
wstream.write('Another line\n');
wstream.end();
You can call write as many times as you need, with parts of that 169 million char string. Once you have finished writing the file, you can create a read stream to read it back in chunks.
However, 169 million chars is not that much; you could also read and write it all at once and keep the whole file in memory.
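For the reading side, a minimal sketch of consuming the file in chunks (the file name is just an example):
var fs = require('fs');
var rstream = fs.createReadStream('myOutput.txt', { encoding: 'utf8' });
rstream.on('data', function (chunk) {
  // each chunk is a slice of the long string
  console.log('got %d chars', chunk.length);
});
rstream.on('end', function () {
  console.log('done reading');
});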
Update: as requested in the comments, here is an example that pipes the stream through gzip on the fly:
var zlib = require('zlib');
var gzip = zlib.createGzip();
var fs = require('fs');
var out = fs.createWriteStream('input.txt.gz');
gzip.pipe(out);
gzip.write('Hello world!\n');
gzip.write('Another line\n');
gzip.end();
This will create a .gz file containing a single file with the same name (minus the .gz extension).
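To read it back from the other process, a reverse sketch decompresses on the fly with a gunzip stream (same file name as above):
var zlib = require('zlib');
var fs = require('fs');

// Decompress while reading, chunk by chunk
fs.createReadStream('input.txt.gz')
  .pipe(zlib.createGunzip())
  .on('data', function (chunk) {
    process.stdout.write(chunk); // handle each decompressed chunk here
  });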
This might solve your problem
var fs = require('fs');
var request = require('request');
var stream = request('http://i.imgur.com/dmetFjf.jpg');
var writeStream = fs.createWriteStream('./testimg.jpg');
stream.pipe(writeStream);
Follow the link for more details
http://neethack.com/2013/12/understand-node-stream-what-i-learned-when-fixing-aws-sdk-bug/
If what you're writing would otherwise be a blocking process, i.e. something that prevents you from doing anything else while it runs, approaching it asynchronously is the best solution (and is why Node.js is good at these kinds of problems). With that in mind, avoid the fs.*Sync methods, as they are synchronous and will block. fs.writeFile is what I believe you're looking for. Read the Docs.
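A minimal sketch of that asynchronous variant (myOutput.txt and veryLongString are placeholders):
var fs = require('fs');

// veryLongString is the string you built up elsewhere.
// Non-blocking: the callback fires once the whole string has been written.
fs.writeFile('myOutput.txt', veryLongString, function (err) {
  if (err) throw err;
  console.log('file saved');
});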

How does this npm build work?

https://github.com/apigee-127/swagger-converter
I see this code:
var convert = require('swagger-converter');
var fs = require('fs');
var resourceListing = JSON.parse(fs.readFileSync('/path/to/petstore/index.json').toString());
var apiDeclarations = [ JSON.parse(fs.readFileSync('/path/to/petstore/pet.json').toString()),
JSON.parse(fs.readFileSync('/path/to/petstore/user.json').toString()),
JSON.parse(fs.readFileSync('/path/to/petstore/store.json').toString())
];
var swagger2Document = convert(resourceListing, apiDeclarations);
console.log(JSON.stringify(swagger2Document, null, 2));
I'm confused as to what exactly I'm supposed to do to run this. Do I start a Node HTTP server?
To run the code you pasted, just save it into a file such as script.js, then from the command line (with Node installed) run node script.js. Here's a breakdown of what it's doing:
var convert = require('swagger-converter');
This line gets a reference to the swagger-converter module you linked to. That module converts Swagger 1.x API documents into the Swagger 2.0 format.
var fs = require('fs');
This line gets a reference to Node's built-in filesystem module (fs for short), which provides an API for interacting with the filesystem on your machine while the script is running.
var resourceListing = JSON.parse(fs.readFileSync('/path/to/petstore/index.json').toString());
This line could be broken down to:
var indexContent = fs.readFileSync('/path/to/petstore/index.json');
JSON.parse(indexContent.toString());
readFileSync returns the contents of the index.json file as a buffer object, which is easily turned into a simple string with the call to .toString(). Then they pass it to JSON.parse which parses the string and turns it into a simple JavaScript object.
Fun Fact: They could have skipped those steps with a simple var resourceListing = require('/path/to/petstore/index.json'). Node knows how to read JSON files and automatically turns them into JavaScript objects; you need only pass the path to require.
var apiDeclarations = [ JSON.parse(fs.readFileSync('/path/to/petstore/pet.json').toString()),
JSON.parse(fs.readFileSync('/path/to/petstore/user.json').toString()),
JSON.parse(fs.readFileSync('/path/to/petstore/store.json').toString())
];
This bit of code does the same thing as the resourceListing except it creates an array of three JavaScript objects based on those JSON files. They also could have used require here to save a bit of work.
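For illustration, the same setup using require for the JSON files (same placeholder paths as above):
// require() parses .json files for you, so the fs + JSON.parse steps go away
var resourceListing = require('/path/to/petstore/index.json');
var apiDeclarations = [
  require('/path/to/petstore/pet.json'),
  require('/path/to/petstore/user.json'),
  require('/path/to/petstore/store.json')
];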
Finally, they use the converter to do the conversion and log the result to the terminal where your script is running:
var swagger2Document = convert(resourceListing, apiDeclarations);
console.log(JSON.stringify(swagger2Document, null, 2));
JSON.stringify is the opposite of JSON.parse. stringify turns a JavaScript object into a JSON string whereas parse turns a JSON string into a JavaScript object.
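A tiny illustration of that round trip (the object here is made up):
var obj = { name: 'petstore', pets: 3 };
var json = JSON.stringify(obj, null, 2); // JavaScript object -> pretty-printed JSON string
var back = JSON.parse(json);             // JSON string -> JavaScript object again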

Best practice on avoid duplicated requires in nodejs

I have multiple js files, all have the same requires in the beginning like
var config = require("config");
var expect = require("chai").expect;
var commonAssertions = require('../../../utils/common_assertions.js');
var commonSteps = require('../../../utils/common_steps.js');
I am thinking about putting all of them in one file and just require this single file.
I am wondering if there is any best practice or convention on this in nodejs.
Remember that require() simply returns whatever the module assigns to module.exports.
So if you were to extract this to a different file, that would be perfectly fine.
includes.js
exports.config = require("config");
exports.expect = require("chai").expect;
exports.commonAssertions = require('../../../utils/common_assertions.js');
exports.commonSteps = require('../../../utils/common_steps.js');
myfile.js
var includes = require('./includes');
includes.expect(true).to.be.true //For example
It is not necessarily a good or bad practice. I would say that if you expect to need the exact same modules from many different files, then go for it.

Buffer entire file in memory with Node.js

I have a relatively small file (some hundreds of kilobytes) that I want to be in memory for direct access for the entire execution of the code.
I don't know exactly the internals of Node.js, so I'm asking if a fs open is enough or I have to read all file and copy to a Buffer?
Basically, you need to use the readFile or readFileSync function from the fs module. They return the complete content of the given file, but differ in their behavior (asynchronous versus synchronous).
If blocking Node.js (e.g. on startup of your application) is not an issue, you can go with the synchronous version, which is as easy as:
var fs = require('fs');
var data = fs.readFileSync('/etc/passwd');
If you need to go asynchronous, the code is like that:
var fs = require('fs');
fs.readFile('/etc/passwd', function (err, data) {
// ...
});
Please note that in either case you can give an options object as the second parameter, e.g. to specify the encoding to use. If you omit the encoding, the raw buffer is returned:
var fs = require('fs');
fs.readFile('/etc/passwd', { encoding: 'utf8' }, function (err, data) {
// ...
});
Valid encodings are utf8, ascii, utf16le, ucs2, base64 and hex. There is also a binary encoding, but it is deprecated and should not be used any longer. You can find more details on how to deal with encodings and buffers in the appropriate documentation.
As easy as
var buffer = fs.readFileSync(filename);
It's also possible to do this synchronously:
var fs = require('fs');
var path = require('path');
// Buffer mydata
var BUFFER = bufferFile('../public/mydata');
function bufferFile(relPath) {
return fs.readFileSync(path.join(__dirname, relPath)); // zzzz....
}
fs is the filesystem module. readFileSync() returns a Buffer, or a string if you ask for one.
fs resolves relative paths against the process's current working directory rather than the script's own directory; joining with __dirname via path works around that.
To load as a string, specify the encoding:
return fs.readFileSync(path.join(__dirname, relPath), { encoding: 'utf8' });
