Dynamically exporting functions in firebase - node.js

I'm having the typical (according to many posts) issue with cold boot times in cloud functions. A solution that seemed promising suggests importing/exporting only the function actually being executed, as can be seen here:
https://github.com/firebase/functions-samples/issues/170#issuecomment-323375462
if (!process.env.FUNCTION_NAME || process.env.FUNCTION_NAME === 'sendFollowerNotification') {
  exports.sendFollowerNotification = require('./sendFollowerNotification');
}
That is a JavaScript example, but I'm using TypeScript. I've tried a number of variations, and while some build, in the end I'm always stuck with my function not being exported, and the deploy warns me that I'm going to delete the existing function.
This is one of the numerous attempts:
if (!process.env.FUNCTION_NAME || process.env.FUNCTION_NAME === 'generateInviteURL') {
  import('./invite_functions')
    .then((mod) => {
      console.log("mod follows");
      console.log(mod);
      exports.generateInviteURL = functions.https.onRequest(mod.generateInviteURL);
    })
    .catch((err) => { console.log("Trying to import/export generateInviteURL ", err); });
}
As mentioned, at deploy time what happens is that I get a warning about the function being deleted.
I was able to "avoid" that message with something like this:
console.log ("Function name: ", process.env.FUNCTION_NAME);
function dummy_generateInviteURL (req, res) { ; }
exports.generateInviteURL = functions.https.onRequest( dummy_generateInviteURL );
if (!process.env.FUNCTION_NAME || process.env.FUNCTION_NAME === 'generateInviteURL') {
console.log ("Doing the good import");
import ('./invite_functions').then ((mod) => { console.log ("mod follows" ); console.log (mod); exports.generateInviteURL = functions.https.onRequest( mod.generateInviteURL ); } )
.catch ((err) => {console.log ("Trying to import/export generateInviteURL ", err);}) ;
}
console.log ("Exported");
console.log (exports.generateInviteURL);
The idea, of course, is that an empty function would always be exported, but it would be replaced with the real one if that's the function being called.
In that case logs look like this:
generateInviteURL Function name: generateInviteURL generateInviteURL
generateInviteURL Exported generateInviteURL
{ [Function: cloudFunction] __trigger: { httpsTrigger: {} } }
So the first part looks promising (the environment variable is defined), then the import does something (enters the then block, never the catch), but the exported variable is not replaced.
I'm not sure if this is a TypeScript problem, a firebase problem, or a developer problem - probably I'm just missing something obvious.
So the question - how can I avoid importing/exporting anything I don't need for each specific function?

You can stick to the original index.js, with minor changes. I have tried a few times and came up with a solution. I'm including a sample directory structure and two TypeScript files (index.ts and another for your custom function). With this you will never have to change index.ts to modify or add functions.
Directory Structure
+ functions
  |
  +-- src
  |     |
  |     +-- index.ts
  |     |
  |     +-- get
  |           |
  |           +-- status.f.ts
  |
  +-- package.json (auto generated)
  |
  +-- package-lock.json (auto generated)
  |
  +-- tsconfig.json (auto generated)
  |
  +-- tslint.json (auto generated)
  |
  +-- lib (auto generated)
  |
  +-- node_modules (auto generated)
src/index.ts
import * as glob from "glob";
import * as camelCase from "camelcase";

const files = glob.sync('./**/*.f.js', { cwd: __dirname, ignore: './node_modules/**' });
for (let f = 0, fl = files.length; f < fl; f++) {
  const file = files[f];
  const functionName = camelCase(file.slice(0, -5).split('/').join('_')); // Strip off '.f.js'
  if (!process.env.FUNCTION_NAME || process.env.FUNCTION_NAME === functionName) {
    exports[functionName] = require(file);
  }
}
src/get/status.f.ts
import * as functions from 'firebase-functions';
exports = module.exports = functions.https.onRequest((req, res) => {
  res.status(200).json({ 'status': 'OK' });
});
Once you have created the above files, install the npm packages 'glob' and 'camelcase', then try to deploy the Firebase functions.
Firebase will deploy a function named 'getStatus'.
Note that the name of the function is the camelCase version of the folder names plus the file name where the function lives, so you can export only one function per .f.ts file.
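For instance, assuming the structure above (the second file is hypothetical), the derived names would be:
// lib/get/status.f.js  -> 'get_status'  -> deployed as 'getStatus'
// lib/user/create.f.js -> 'user_create' -> deployed as 'userCreate'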
EDIT
I have updated the dir structure. Note that index.ts and all the subsequent files and folders reside within the parent folder 'src'.

I've been having similar issues, and I wrote a pretty cool solution that worked for me.
I decided to release it as an open-source package - I've never done that before so if anyone can check it out, contribute or give feedback that would be great.
The package is called better-firebase-functions - https://www.npmjs.com/package/better-firebase-functions
All you have to do is import and run it. Two lines. Pretty much everything else is automated.
It will pick up the default exports of all function files in your directory and deploy them as properly named functions.
Here is an example:
import { exportFunctions } from 'better-firebase-functions';

exportFunctions({
  __filename, // standard node var (leave as is)
  exports, // standard node var (leave as is)
  functionDirectoryPath: './myFuncs', // root functions folder, relative to this file
  searchGlob: '**/*.js' // file search glob pattern
});
The only other thing you need to do is export your functions.https.onRequest()... as a default export from every file that contains a cloud function you want to deploy. So:
export default functions.https.onRequest()...
// OR
const func1 = functions.https.onRequest()...
export default func1;
And that's it!
UPDATE: Answer edited to reflect newer version of package.

Related

How to export everything from every file in a directory?

Usually, I create an index.js file that exports everything from every file in a specific directory, like this:
export * from "./file1"
export * from "./file2"
export * from "./file3"
This is a common pattern in all of my projects. The downside of this approach is that whenever I create a new file in a directory, I have to update the corresponding index.js file to export the new file. Now I'm trying to automate that and write a script that exports everything from every file in a directory.
After some googling, I came up with this:
import fs from "fs";
export let exp = {};
fs.readdirSync("./").forEach(function (file) {
if (file.indexOf(".js") > -1 && file != "index.js") {
const imp = require(`./` + file);
exp = { ...exp, ...imp };
}
});
But it doesn't work, maybe because I used the require function and that doesn't work with ES modules.
Also, I can't write something like this, because I'm not allowed to use export inside a function block:
export let exp = {};
fs.readdirSync("./").forEach(function (file) {
  if (file.indexOf(".ts") > -1 && file != "index.ts") {
    export * from ("./" + file);
  }
});
So I'm stuck here. Do you have any idea about this?
I suggest using a package like glob. The glob package allows you to get all files inside a directory, including those hidden within subfolders.
Example usage:
const glob = require("glob");
glob("**/*", function (err, files) {
// files is an array of filenames
// err is an error object or null
});
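To actually re-export everything, you can combine glob with require and merge each module's exports into the index. A minimal CommonJS sketch (for native ES modules you would need dynamic import() and top-level await instead):
const path = require('path');
const glob = require('glob');

// Collect every sibling .js file except this index, then merge its exports.
const files = glob.sync('*.js', { cwd: __dirname, ignore: 'index.js' });
for (const file of files) {
  const mod = require(path.resolve(__dirname, file));
  Object.assign(module.exports, mod); // later files win on name collisions
}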

Discard and re-import dynamic import [duplicate]

From the node.js documentation:
Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
Is there a way to invalidate this cache? i.e. for unit testing, I'd like each test to be working on a fresh object.
You can always safely delete an entry in require.cache, even when there are circular dependencies. Deleting only removes a reference to the cached module object, not the module object itself; the module object will not be garbage-collected, because with circular dependencies there is still an object referencing it.
Suppose you have:
script a.js:
var b = require('./b.js').b;
exports.a = 'a from a.js';
exports.b = b;
and script b.js:
var a = require('./a.js').a;
exports.b = 'b from b.js';
exports.a = a;
when you do:
var a = require('./a.js')
var b = require('./b.js')
you will get:
> a
{ a: 'a from a.js', b: 'b from b.js' }
> b
{ b: 'b from b.js', a: undefined }
now if you edit your b.js:
var a = require('./a.js').a;
exports.b = 'b from b.js. changed value';
exports.a = a;
and do:
delete require.cache[require.resolve('./b.js')]
b=require('./b.js')
you will get:
> a
{ a: 'a from a.js', b: 'b from b.js' }
> b
{ b: 'b from b.js. changed value',
a: 'a from a.js' }
===
The above is valid if directly running node.js. However, if using tools that have their own module caching system, such as jest, the correct statement would be:
jest.resetModules();
If you always want to reload your module, you could add this function:
function requireUncached(module) {
  delete require.cache[require.resolve(module)];
  return require(module);
}
and then use requireUncached('./myModule') instead of require.
Yes, you can access the cache via require.cache[moduleName] where moduleName is the name of the module you wish to access. Deleting an entry by calling delete require.cache[moduleName] will cause require to load the actual file.
This is how you would remove all cached files associated with the module:
/**
 * Removes a module from the cache
 */
function purgeCache(moduleName) {
  // Traverse the cache looking for the files
  // loaded by the specified module name
  searchCache(moduleName, function (mod) {
    delete require.cache[mod.id];
  });

  // Remove cached paths to the module.
  // Thanks to @bentael for pointing this out.
  Object.keys(module.constructor._pathCache).forEach(function (cacheKey) {
    if (cacheKey.indexOf(moduleName) > 0) {
      delete module.constructor._pathCache[cacheKey];
    }
  });
}

/**
 * Traverses the cache to search for all the cached
 * files of the specified module name
 */
function searchCache(moduleName, callback) {
  // Resolve the module identified by the specified name
  var mod = require.resolve(moduleName);

  // Check if the module has been resolved and found within
  // the cache
  if (mod && ((mod = require.cache[mod]) !== undefined)) {
    // Recursively go over the results
    (function traverse(mod) {
      // Go over each of the module's children and
      // traverse them
      mod.children.forEach(function (child) {
        traverse(child);
      });

      // Call the specified callback providing the
      // found cached module
      callback(mod);
    }(mod));
  }
}
Usage would be:
// Load the package
var mypackage = require('./mypackage');
// Purge the package from cache
purgeCache('./mypackage');
Since this code uses the same resolver require does, just specify whatever you would for require.
"Unix was not designed to stop its users from doing stupid things, as
that would also stop them from doing clever things." – Doug Gwyn
I think there should be a way to perform an explicit uncached module load.
There's a Simple Module for that (with tests)
We had this exact issue while testing our code (delete cached modules so they can be re-required in a fresh state) so we reviewed all the suggestions of people on the various StackOverflow Questions & Answers and put together a simple node.js module (with tests):
https://www.npmjs.com/package/decache
As you would expect, it works for both published npm packages and locally defined modules, on Windows, Mac, Linux, etc.
How? (usage)
Usage is pretty simple:
install
Install the module from npm:
npm install decache --save-dev
Use it in your code:
// require the decache module:
const decache = require('decache');

// require a module that you wrote:
let mymod = require('./mymodule.js');

// use your module the way you need to:
console.log(mymod.count()); // 0 (the initial state for our counter is zero)
console.log(mymod.incrementRunCount()); // 1

// delete the cached module:
decache('./mymodule.js');

mymod = require('./mymodule.js'); // fresh start
console.log(mymod.count()); // 0 (back to initial state ... zero)
If you have any questions or need more examples, please create a GitHub issue:
https://github.com/dwyl/decache/issues
For anyone coming across this who is using Jest: because Jest does its own module caching, there's a built-in function for this. Just make sure jest.resetModules runs, e.g., after each of your tests:
afterEach(function () {
  jest.resetModules();
});
Found this after trying to use decache like another answer suggested. Thanks to Anthony Garvan.
Function documentation here.
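As a minimal sketch (the counter module and its API are hypothetical), each test then gets a fresh copy of the module's internal state:
test('starts from zero every time', () => {
  const counter = require('./counter'); // hypothetical module with internal state
  counter.increment();
  expect(counter.value()).toBe(1); // increments from earlier tests don't leak in
});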
The solution is to use:
delete require.cache[require.resolve(<path of your script>)]
Find here some basic explanations for those who, like me, are a bit new in this:
Suppose you have a dummy example.js file in the root of your directory:
exports.message = "hi";
exports.say = function () {
console.log(message);
}
Then you require() like this:
$ node
> require('./example.js')
{ message: 'hi', say: [Function] }
If you then add a line like this to example.js:
exports.message = "hi";
exports.say = function () {
console.log(message);
}
exports.farewell = "bye!"; // this line is added later on
And continue in the console, the module is not updated:
> require('./example.js')
{ message: 'hi', say: [Function] }
That's when you can use the delete require.cache[require.resolve()] technique indicated in luff's answer:
> delete require.cache[require.resolve('./example.js')]
true
> require('./example.js')
{ message: 'hi', say: [Function], farewell: 'bye!' }
So the cache is cleaned and the require() captures the content of the file again, loading all the current values.
rewire is great for this use case: you get a new instance with each call. Easy dependency injection for node.js unit testing.
rewire adds a special setter and getter to modules so you can modify their behaviour for better unit testing. You may:
- inject mocks for other modules or globals like process
- leak private variables
- override variables within the module.
rewire does not load the file and eval the contents to emulate node's require mechanism. In fact it uses node's own require to load the module. Thus your module behaves exactly the same in your test environment as under regular circumstances (except your modifications).
Good news for all caffeine addicts: rewire also works with CoffeeScript. Note that in this case CoffeeScript needs to be listed in your devDependencies.
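A minimal usage sketch (myModule and its private variable secret are hypothetical):
const rewire = require('rewire');

const myModule = rewire('./myModule'); // a fresh instance on every call
myModule.__set__('secret', 'stub'); // override a private variable inside the module
const current = myModule.__get__('secret'); // read a private variable back out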
I'd add to luff's answer one more line and change the parameter name:
function requireCached(_module) {
  var l = module.children.length;
  for (var i = 0; i < l; i++) {
    if (module.children[i].id === require.resolve(_module)) {
      module.children.splice(i, 1);
      break;
    }
  }
  delete require.cache[require.resolve(_module)];
  return require(_module);
}
Yes, you can invalidate the cache.
The cache is stored in an object called require.cache, which you can access directly by filename (e.g. /projects/app/home/index.js, as opposed to the ./home you would use in a require('./home') statement).
delete require.cache['/projects/app/home/index.js'];
Our team has found the following module useful for invalidating certain groups of modules:
https://www.npmjs.com/package/node-resource
I am not 100% certain of what you mean by 'invalidate', but you can add the following above the require statements to clear the cache:
Object.keys(require.cache).forEach(function(key) { delete require.cache[key] })
Taken from @Dancrumb's comment here
requireUncached with relative path: 🔥
const requireUncached = require => module => {
  delete require.cache[require.resolve(module)];
  return require(module);
};

module.exports = requireUncached;
Invoke requireUncached with a relative path:
const requireUncached = require('../helpers/require_uncached')(require);
const myModule = requireUncached('./myModule');
I couldn't neatly add code in an answer's comment, but I would use @Ben Barkay's answer and then add this to the require.uncache function:
// see https://github.com/joyent/node/issues/8266
// use it in @Ben Barkay's require.uncache function or along with it, whatever
Object.keys(module.constructor._pathCache).forEach(function (cacheKey) {
  if (cacheKey.indexOf(moduleName) > -1) {
    delete module.constructor._pathCache[cacheKey];
  }
});
Say you've required a module, then uninstalled it, then reinstalled the same module at a different version whose package.json points to a different main script. The next require will fail, because that main script does not exist, while the old path is still cached in Module._pathCache.
If you want a module to simply never be cached (sometimes useful for development, but remember to remove it when done!) you can just put delete require.cache[module.id]; inside the module.
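As a sketch, such a self-uncaching module looks like this (file name hypothetical):
// always-fresh.js: every require('./always-fresh') re-evaluates this file
module.exports = { loadedAt: Date.now() };
// Last step of evaluation: remove this module's own entry from the cache.
delete require.cache[module.id];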
Here's my version of this answer, which keeps the old copy if the new file fails to load (for example, due to syntax errors):
function reacquire(module) {
  const fullpath = require.resolve(module);
  const backup = require.cache[fullpath];
  delete require.cache[fullpath];

  try {
    const newcopy = require(module);
    console.log("reacquired:", module, typeof newcopy);
    return newcopy;
  } catch (e) {
    console.log("Can't reacquire", module, ":", e.message);
    require.cache[fullpath] = backup;
    return backup;
  }
}
The following two-step procedure works perfectly for me.
After dynamically changing a model file, i.e. 'mymodule.js', you need to delete the precompiled mongoose model first, then reload it using require-reload.
Example:
// Delete mongoose model
delete mongoose.connection.models[thisObject.singular('mymodule')]
// Reload model
var reload = require('require-reload')(require);
var entityModel = reload('./mymodule.js');
The documentation says:
Modules are cached in this object when they are required. By deleting a key value from this object, the next require will reload the module. This does not apply to native addons, for which reloading will result in an error.
If it's for unit tests, another good tool to use is proxyquire. Every time you proxyquire the module, it invalidates the module cache and caches a new one. It also allows you to override the modules required by the file that you are testing.
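A minimal sketch, assuming a hypothetical './service' module that internally requires './dep':
const proxyquire = require('proxyquire');

// Loads a fresh copy of './service' whose require('./dep') resolves to the stub.
const service = proxyquire('./service', {
  './dep': { fetch: () => 'stubbed result' }
});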
I made a small module that deletes a module from the cache after loading it. This forces re-evaluation of the module the next time it is required. See https://github.com/bahmutov/require-and-forget
// random.js
module.exports = Math.random()
const forget = require('require-and-forget')
const r1 = forget('./random')
const r2 = forget('./random')
// r1 and r2 will be different
// "random.js" will not be stored in the require.cache
P.S.: you can also put the "self-destruct" into the module itself. See https://github.com/bahmutov/unload-me
P.P.S.: more tricks with Node require are in my blog post: https://glebbahmutov.com/blog/hacking-node-require/

Can I load multiple files with one require statement?

Maybe this question is a little silly, but is it possible to load multiple .js files with one require statement? Like this:
var mylib = require('./lib/mylibfiles');
and use:
mylib.foo(); // returns "hello from one"
mylib.bar(); // returns "hello from two"
And the folder mylibfiles will have two files:
One.js
exports.foo= function(){return "hello from one";}
Two.js
exports.bar= function(){return "hello from two";}
I was thinking of putting a package.json in the folder that says to load all the files, but I don't know how. Another approach I was considering is to have an index.js that exports everything again, but then I would be duplicating work.
Thanks!!
P.S.: I'm working with Node.js v0.6.11 on a Windows 7 machine.
First of all, using require does not duplicate anything. It loads the module and caches it, so calling require again will get it from memory (thus you can modify the module on the fly without touching its source code; this is sometimes desirable, for example when you want to store a db connection inside the module).
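A sketch of that db-connection pattern (the './driver' module and createConnection are hypothetical):
// db.js: because require() caches this module, the connection is created once
// and every require('./db') in the process receives the same object.
const { createConnection } = require('./driver'); // hypothetical driver module
module.exports = createConnection({ host: 'localhost' });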
Also package.json does not load anything and does not interact with your app at all. It is only used for npm.
Now, you cannot require multiple modules at once. For example, what would happen if both One.js and Two.js defined a function with the same name? There are more problems besides.
But what you can do is write an additional file, say modules.js, with the following content:
module.exports = {
  one: require('./one.js'),
  two: require('./two.js'),
  /* some other modules you want */
}
and then you can simply use
var modules = require('./modules.js');
modules.one.foo();
modules.two.bar();
I have a snippet of code that requires more than one module, but it doesn't clump them together as your post suggests. However, that can be overcome with a trick that I found.
function requireMany() {
  return Array.prototype.slice.call(arguments).map(function (value) {
    try {
      return require(value)
    } catch (event) {
      return console.log(event)
    }
  })
}
And you use it as such
requireMany("fs", "socket.io", "path")
Which will return
[ fs {}, socketio {}, path {} ]
If a module is not found, an error will be logged to the console but it won't break the program; the failed module shows up in the array as undefined, so the array will not be shorter because one of the modules failed to load.
Then you can bind each of those array elements to a variable name, like so:
var [fs, socketio, path] = requireMany("fs", "socket.io", "path")
It essentially works like an object, assigning each value to its own variable name. So, in your case, you could do:
var [foo, bar] = requireMany("./foo.js", "./bar.js")
foo() // returns "hello from one"
bar() // returns "hello from two"
And if you do want it to break the program on error, just use this modified version, which is smaller:
function requireMany() {
  return Array.prototype.slice.call(arguments).map(require)
}
Yes, you may require a folder as a module, according to the node docs. Let's say you want to require() a folder called ./mypack/.
Inside ./mypack/, create a package.json file with the name of the folder and a main javascript file with the same name, inside a ./lib/ directory.
{
  "name": "mypack",
  "main": "./lib/mypack.js"
}
Now you can use require('./mypack') and node will load ./mypack/lib/mypack.js.
However if you do not include this package.json file, it may still work. Without the file, node will attempt to load ./mypack/index.js, or if that's not there, ./mypack/index.node.
My understanding is that this could be beneficial if you have split your program into many javascript files but do not want to concatenate them for deployment.
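As a sketch of that index.js fallback (reusing One.js and Two.js from the question), require('./mylibfiles') would automatically pick up a file like this:
// mylibfiles/index.js: loaded by require('./mylibfiles') when package.json has no "main"
exports.foo = require('./One.js').foo;
exports.bar = require('./Two.js').bar;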
You can use destructuring assignment to map an array of exported modules from require statements in one line:
const requires = (...modules) => modules.map(module => require(module));
const [fs, path] = requires('fs', 'path');
I was doing something similar to what @freakish suggests in his answer, in a project where I have a list of test scripts that are pulled into a Puppeteer + Jest testing setup. My test files follow the naming convention testname1.js - testnameN.js, and I was able to use a generator function to require N files from a particular directory with the approach below:
const fs = require('fs');
const path = require('path');

module.exports = class FilesInDirectory {
  constructor(directory) {
    this.fid = fs.readdirSync(path.resolve(directory));
    this.requiredFiles = (this.fid.map((fileId) => {
      let resolvedPath = path.resolve(directory, fileId);
      return require(resolvedPath);
    })).filter(file => !!file);
  }

  printRetrievedFiles() {
    console.log(this.requiredFiles);
  }

  nextFileGenerator() {
    const parent = this;
    const fidLength = parent.requiredFiles.length;
    function* iterate(index) {
      while (index < fidLength) {
        yield parent.requiredFiles[index++];
      }
    }
    return iterate(0);
  }
}
Then use like so:
// Use in test
const FilesInDirectory = require('./utilities/getfilesindirectory');
const StepsCollection = new FilesInDirectory('./test-steps');
const StepsGenerator = StepsCollection.nextFileGenerator();

// Assuming we're in an async function
await StepsGenerator.next().value.FUNCTION_REQUIRED_FROM_FILE(someArg);

Detect if called through require or directly by command line

How can I detect whether my Node.js file was invoked directly from the command line (node path-to-file) or loaded with require('path-to-file')?
This is the Node.JS equivalent to my previous question in Perl: How can I run my Perl script only if it wasn't loaded with require?
if (require.main === module) {
  console.log('called directly');
} else {
  console.log('required as a module');
}
See documentation for this here: https://nodejs.org/docs/latest/api/modules.html#modules_accessing_the_main_module
There is another, slightly shorter way (not outlined in the mentioned docs).
var runningAsScript = !module.parent;
I outlined more details about how this all works under the hood in this blog post.
For those using ES Modules (and Node 10.12+), you can use import.meta.url:
import path from 'path';
import { fileURLToPath } from 'url';

const nodePath = path.resolve(process.argv[1]);
const modulePath = path.resolve(fileURLToPath(import.meta.url));
const isRunningDirectlyViaCLI = nodePath === modulePath;
Things like require.main, module.parent and __dirname/__filename aren’t available in ESM.
Note: If using ESLint it may choke on this syntax, in which case you’ll need to update to ESLint ^7.2.0 and turn your ecmaVersion up to 11 (2020).
More info: process.argv, import.meta.url
I was a little confused by the terminology used in the explanation(s). So I had to do a couple quick tests.
I found that these produce the same results:
var isCLI = !module.parent;
var isCLI = require.main === module;
And for the other confused people (and to answer the question directly):
var isCLI = require.main === module;
var wasRequired = !isCLI;
Try this if you are using ES6 modules:
if (process.mainModule.filename === __filename) {
  console.log('running as main module');
}
I always find myself trying to recall how to write this goddamn code snippet, so I decided to create a simple module for it. It took me a while to make it work, since accessing the caller's module info is not straightforward, but it was fun to see how it could be done.
So the idea is to call a module and ask it if the caller module is the main one. We have to figure out the module of the caller function. My first approach was a variation of the accepted answer:
module.exports = function () {
  return require.main === module.parent;
};
But that is not guaranteed to work. module.parent points to the module which loaded us into memory, not the one calling us. If it is the caller module that loaded this helper module into memory, we're good. But if it isn't, it won't work. So we need to try something else. My solution was to generate a stack trace and get the caller's module name from there:
module.exports = function () {
  // generate a stack trace
  const stack = (new Error()).stack;
  // the third line refers to our caller
  const stackLine = stack.split("\n")[2];
  // extract the module name from that line
  const callerModuleName = /\((.*):\d+:\d+\)$/.exec(stackLine)[1];
  return require.main.filename === callerModuleName;
};
Save this as is-main-module.js and now you can do:
const isMainModule = require("./is-main-module");

if (isMainModule()) {
  console.info("called directly");
} else {
  console.info("required as a module");
}
Which is easier to remember.
First, let's define the problem better. My assumption is that what you are really looking for is whether your script owns process.argv (i.e. whether your script is responsible for processing process.argv). With this assumption in mind, the code and tests below are accurate.
module.parent works excellently, but it is deprecated for good reasons (a module might have multiple parents, in which case module.parent only represents the first parent), so use the following future-proof condition to cover all cases:
if (
  typeof process === 'object' && process && process.argv
  && (
    (
      typeof module === 'object' && module
      && (
        !module.parent
        || require.main === module
        || (process.mainModule && process.mainModule.filename === __filename)
        || (__filename === "[stdin]" && __dirname === ".")
      )
    )
    || (
      typeof document === "object"
      && (function () {
        var scripts = document.getElementsByTagName("script");
        try { // in case we are in a special environment without path
          var normalize = require("path").normalize;
          for (var i = 0, len = scripts.length|0; i < len; i = i + 1|0)
            if (normalize(scripts[i].src.replace(/^file:/i, "")) === __filename)
              return true;
        } catch (e) {}
      })()
    )
  )
) {
  // this module is top-level and invoked directly by the CLI
  console.log("Invoked from CLI");
} else {
  console.log("Not invoked from CLI");
}
It works correctly in all of the scripts in all of the following cases and never throws any errors†:
Requiring the script (e.g. require('./main.js'))
Directly invoking the script (e.g. nodejs cli.js)
Preloading another script (e.g. nodejs -r main.js cli.js)
Piping into the node CLI (e.g. cat cli.js | nodejs)
Piping with preloading (e.g. cat cli.js | nodejs -r main.js)
In workers (e.g. new Worker('./worker.js'))
In evaled workers (e.g. new Worker('if (<test for CLI>) ...', { eval: true }))
Inside ES6 modules (e.g. nodejs --experimental-modules cli-es6.js)
Modules with preload (e.g. nodejs --experimental-modules -r main-es6.js cli-es6.js)
Piped ES6 modules (e.g. cat cli-es6.js | nodejs --experimental-modules)
Pipe+preload module (e.g. cat cli-es6.js | nodejs --experimental-modules -r main-es6.js)
In the browser (in which case CLI is false because there is no process.argv)
In mixed browser+server environments (e.g. ElectronJS, in which case both inline scripts and all modules loaded via <script> tags are considered CLI)
The only case where it does not work is when you preload the top-level script (e.g. nodejs -r cli.js cli.js). This problem cannot be solved by piping (e.g. cat cli.js | nodejs -r cli.js), because that executes the script twice (once as a required module and once as top-level). I do not believe there is any possible fix for this, because there is no way to know what the main script will be from inside a preloaded script.
† Theoretically, errors might be thrown from inside a getter for an object (e.g. if someone were crazy enough to do Object.defineProperty(globalThis, "process", { get() { throw 0 } });), but this will never happen under default circumstances for the properties used in the code snippet in any environment.
How can I detect whether my Node.js file was called directly from the console (Windows and Unix systems) or loaded via an ESM module import (import { foo } from 'bar.js')?
Such functionality is not exposed. For the moment you should separate your CLI and library logic into separate files.
Answer from Node.js core contributor devsnek, replying in nodejs/help/issues/2420.
That's the right answer, from my point of view.
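A sketch of that separation (file names hypothetical):
// lib.js: library logic only, safe to require or import from anywhere
exports.greet = (name) => `Hello, ${name}!`;

// cli.js: a thin entry point, run as `node cli.js <name>`
const { greet } = require('./lib.js');
console.log(greet(process.argv[2] || 'world'));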
