Please find below some sample code in Node.js:
var hello_file = require.resolve('hello')
var hello = require('hello')
console.log(hello.hello()); // there is a method hello in module hello.js
delete require.cache[hello_file]
console.log(hello.hello()); // it still works
I thought the delete would remove the reference to the module and hence the last line should throw an error. But it does not. What could be the reason, and what does deleting the cache entry really do?
The cache doesn't know about it anymore but your var hello still has a reference to what was previously loaded.
The next time you call require('hello') it will load the module from the file. But, until you update the reference that var hello is holding, it will continue to point to the originally loaded module.
As you know, Node loads a module only once even if you require it many times; modules are cached after the first time they are loaded. If you delete the entry from the cache, the module will be reloaded from the filesystem into the cache the next time you require it.
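For illustration, here is a minimal sketch of that behaviour, assuming a local file ./hello.js (the original question used a module named hello) that exports a hello() function:
// assumes ./hello.js contains: module.exports.hello = function () { return 'hi'; };
var helloFile = require.resolve('./hello.js');
var hello = require('./hello.js');
console.log(hello.hello()); // 'hi'
delete require.cache[helloFile];
console.log(hello.hello()); // still 'hi', the old variable keeps its reference
var reloaded = require('./hello.js'); // only a new require() re-reads the file
console.log(reloaded === hello); // false, a fresh module object was created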
I am using a particular module in my Node code:
const example = require('example');
However, this module is slow to be updated, so I have forked it and published it with my updates under my own scope on npmjs.com. Now, to use my own module, I must change every use in my code:
const example = require('@my-username/example');
The problem with this is that I will have to commit a bunch of changes throughout many files to rename the module, then when upstream merges my changes into the official version I will have to update my code again to remove the scope operator from require() across all those files, then add it back if I have more changes that are slow to be accepted, and so on.
Is there a way that I can tell Node or NPM that if require() can't find a module with no scope in the name, to then check all the @scope folders in node_modules to see if there's a match there?
If that were possible then I would only need to update package.json with the relevant package version and the code itself could remain unchanged as I switch between my fork and the official version.
You can implement it using module-alias.
This will slow down your startup, but it lets you apply this kind of logic to every require you make in your application.
const moduleAlias = require('module-alias')
// Custom handler function (starting from v2.1)
moduleAlias.addAlias('request', (fromPath, request, alias) => {
  console.log({
    fromPath,
    request,
    alias,
  });
  return __dirname + '/my-custom-request'
})
require('request')
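For your specific case, a hedged sketch (the package names are taken from your question, and it assumes the scoped fork is installed in node_modules) would be to register the unscoped name as an alias for the fork in whatever file boots your app, before any other require:
const moduleAlias = require('module-alias')

// Point the unscoped name at the installed scoped fork.
// Remove this line once upstream has merged your changes.
moduleAlias.addAlias('example', '@my-username/example')

// Everywhere else in the code base this now resolves to the fork.
const example = require('example')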
Hi, I am structuring my Node.js project based on this, like so:
Root
  product name
    index.js: contains the requires for the product and the main export
    productName.js: contains the application logic
  test
    test1.js
    test2.js
    ...
Now I have two questions
What should logically go in index.js? At the moment I have this (would this be a good way to do things and what else might I include in index.js?):
// index.js
var myServer = require('./myServer.js'); // "product name" = "myServer"
module.exports = {
  run: myServer.listen
};
Does it matter what I call the object key in module.exports (currently "run")? And why does the server always run when I execute index.js with $ node index.js? How does it automatically know to run myServer.listen?
P.S.: I am aware of web structure auto-generation tools, I just wish to understand the logical reason for this suggested structure (the idea of not having any logic in index.js)
Since you mentioned this is an Express service: if it only handles the backend of some application, or more specifically is a backend-only application, I would suggest renaming your index.js to server.js (thus explicitly stating that it will process all service requests).
But if not, then index.js is fine.
Now, for question 1:
What you've put is absolutely fine. Apart from that, you could require all modules and routes (or controllers, whatever you name them) so that index.js serves as the entry point to your application. Try not to put any logic in here.
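For example, a minimal sketch of such an index.js (the routes module and its register function are made-up names, only there to show the wiring with no logic):
// index.js: wire the pieces together, no application logic here
var myServer = require('./myServer.js');
var routes = require('./routes.js'); // hypothetical routes module

routes.register(myServer); // hypothetical helper that attaches routes to the server

module.exports = {
  run: myServer.listen
};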
For question 2:
Actually, the server runs because Node executes the script in the file called index.js, and that script brings in myServer.listen. If you had written console.log("Hello World") and used $ node index.js, it would have printed Hello World instead.
Node simply executes whatever script is in index.js; in your case, that starts the server.
As for why not to put anything else in index.js: the reasoning I consider good enough is that it provides abstraction. Since it is the entry point, I don't want index.js to worry about things like what to do with this data and so on. I believe it should only provide a base to set up the server, thus following the single-responsibility principle to some extent. It also means I won't have to touch it over the project's lifetime unless some major change occurs, e.g. I decide to shift from Express to something else.
EDIT
Why have a key called run?
You seem to have answered it yourself (in the comments). A more accurate description is that you are attaching an object to module.exports; since it is an object (similar to JSON), it is supposed to have a key, which could be anything, not necessarily run (it could have been hii). Now, if you don't want a key and only want to export one thing, namely myServer.listen, then you could write module.exports = myServer.listen; instead of
module.exports = {
  hii: myServer.listen
}
Note that you can export more things the way you did. For more details about module.exports, refer to this, or better, google it, as this link might expire at any time and does not seem an ideal thing to put on SO.
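As a small illustration of exporting more than one thing (the stop key and myServer.close are assumptions; they only work if myServer actually exposes a close function):
// index.js
var myServer = require('./myServer.js');

module.exports = {
  run: myServer.listen,
  stop: myServer.close // hypothetical, requires myServer to expose close
};

// consumer.js
var product = require('./index.js');
// product.run and product.stop are now both available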
I'm getting inconsistent results from Angular/Karma/Jasmine. When I run 'npm test', I get:
INFO [karma]: Karma v0.10.10 server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [Chrome 35.0.1916 (Linux)]: Connected on socket aW0Inld7aRhONC2vo04k
Chrome 35.0.1916 (Linux): Executed 1 of 1 SUCCESS (0.345 secs / 0.016 secs)
Then, if I just save the code or test file (no changes), it will sometimes have the same results, sometimes it gives errors:
INFO [watcher]: Removed file "/home/www/raffler/entries.js".
Chrome 35.0.1916 (Linux) Raffler controllers RafflerCtrl should start with an empty masterList FAILED
Error: [$injector:modulerr] Failed to instantiate module raffler due to:
Error: [$injector:nomod] Module 'raffler' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
This is WITHOUT MAKING CHANGES. If I stop Karma and restart it works again, but once it fails it always fails. What gives? Buggy Angular/Jasmine/Karma? The code and test are trivial. Here is the code:
var myApp = angular.module('raffler', []);
myApp.controller('RafflerCtrl', function($scope, $log) {
  $scope.$log = $log;
  $scope.masterList = [];
});
And here is the test:
'use strict';
describe('Raffler controllers', function() {
  describe('RafflerCtrl', function() {
    var scope, ctrl;
    beforeEach(module('raffler'));
    beforeEach(inject(function($controller) {
      scope = {};
      ctrl = $controller('RafflerCtrl', {$scope: scope});
    }));
    it('should start with an empty masterList', function() {
      expect(scope.masterList.length).toBe(0);
    });
  });
});
Am I doing something dumb? Seems like it should give me consistent results, regardless of my stupidity level... Thanks.
You were asking if there was a bug. There is. The authors of Karma know that there are problems with file watching. See this issue: https://github.com/karma-runner/karma/issues/974
Simply saving the file without changes can trigger this behavior. There are two main ways that files get saved. The first is to delete the original (or rename it to .bak or something) and then write out the new content. The second writes the new content to a temporary file, deletes the original, and then moves the temporary file to where the original used to be. For both of those, the file system monitoring can fire an event saying that some files/directories changed; Node is quick enough to detect that the file was removed and tells Karma to stop using it in its tests. A slightly less common third way is to open the file in a special way and overwrite the contents in place, which will keep Karma happy.
Back to that bug ... in the above scenario, the globs are not re-evaluated when file system changes are detected. So it thinks your file was removed. It never saw that a new file was added, thus it's now out of the test suite.
This bug is bothering me too. If you've got an idea or a pull request then I'd suggest providing it to the Karma team. There's already a patch being reviewed that should address these problems - see https://github.com/karma-runner/karma/issues/1123.
As a workaround, you can use "set backupcopy=yes" for vim. There may be settings in other editors to change the behavior so that the file is overwritten instead of replaced.
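If you want to see which kind of event your editor's save actually produces, a quick sketch with Node's built-in fs.watch can help (it watches entries.js from the log above; adjust the path as needed, and note the reported events are platform-dependent):
var fs = require('fs');

// Logs 'change' for an in-place overwrite, and typically 'rename'
// when the file is deleted/recreated or replaced via a temp file.
fs.watch('entries.js', function (eventType, filename) {
  console.log(eventType, filename);
});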
I'm using a file for storing JSON data. My module performs CRUD actions on the file, and I'm using require() to load the JSON instead of fs.readFile(). The issue is, if the file is deleted using fs.unlink(), then calling the file again using require still loads the file... which has just been deleted. I'm a bit lost as to how to get around this; is it possibly something to do with garbage collection?
Example:
var fs = require('fs');

fs.writeFile('foo.json', JSON.stringify({foo: "bar"}), function() {
  var j = require('./foo.json');
  fs.unlink('./foo.json', function() {
    console.log('File deleted');
    var j = require('./foo.json');
    console.log(j);
  });
});
When loading a module using require, Node.js caches the loaded module internally so that subsequent calls to require do not need to access the drive again. The same is true for .json files, when loaded using require.
That's why your file is still "loaded", although you deleted it.
The solution to this issue is to use the function for loading a file that is appropriate for it, which you already mentioned: fs.readFile(). Once you use that, everything will work as expected.
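A minimal sketch of the same flow with fs.readFile (reusing foo.json from your example) behaves as you expect: once foo.json is unlinked, the second read fails instead of returning stale data:
var fs = require('fs');

fs.writeFile('foo.json', JSON.stringify({ foo: 'bar' }), function () {
  fs.readFile('foo.json', 'utf8', function (err, data) {
    console.log(JSON.parse(data)); // { foo: 'bar' }
    fs.unlink('foo.json', function () {
      fs.readFile('foo.json', 'utf8', function (err) {
        console.log(err.code); // 'ENOENT', the file really is gone
      });
    });
  });
});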
Let's say that after I require a module, I do something with it, as below:
var b = require('./b.js');
// --- do something with b ---
Then I want to take away module b (i.e. clean it out of the cache). How can I do that?
The reason is that I want to dynamically load, remove, or update the module without restarting the Node server. Any ideas?
------- more --------
Based on the suggestion to delete require.cache, it still doesn't work...
What I did was a few things:
1) delete require.cache[require.resolve('./b.js')];
2) loop over every entry in require.cache's children and remove any child that is b.js
3) delete b
However, when I call b, it is still there! It is still accessible, unless I do this:
b = {};
Not sure if that is a good way to handle it.
Because if, later, I require('./b.js') again after b.js has been modified, will it require the old cached b.js (which I tried to delete), or the new one?
----------- More findings --------------
OK, I did more testing and played around with the code. Here is what I found:
1) Deleting the require.cache[] entry is essential. Only once it is deleted will loading a new b.js take effect the next time.
2) Looping through require.cache[] and deleting any entry in the children with the full filename of b.js doesn't have any effect, i.e. you can delete it or leave it. However, I'm unsure if there is any side effect. I think it is a good idea to keep it clean and delete it if there is no performance impact.
3) Of course, assigning b = {} isn't really necessary, but I think it is useful to also keep things clean.
You can use this to delete its entry in the cache:
delete require.cache[require.resolve('./b.js')]
require.resolve() will figure out the full path of ./b.js, which is used as a cache key.
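Putting it together, a small helper along these lines (the name requireUncached is mine, not a Node API) forces a fresh read from disk on every call:
function requireUncached(modulePath) {
  // Drop the cached entry (if any), then load the file again from disk.
  // Note: the relative path is resolved from this file's location.
  delete require.cache[require.resolve(modulePath)];
  return require(modulePath);
}

var b = requireUncached('./b.js'); // picks up any edits made to b.js since the last load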
I spent some time trying to clear the cache in Jest tests for a Vuex store, with no luck. It seems Jest has its own mechanism that doesn't need a manual call to delete require.cache.
beforeEach(() => {
  jest.resetModules();
});
And tests:
let store;
it("1", () => {
process.env.something = true;
store = require("#/src/store.index");
});
it("2", () => {
process.env.something = false;
store = require("#/src/store.index");
});
Both stores will be different modules.
One of the easiest ways (although not the best in terms of performance, since even unrelated modules' cache entries get cleared) would be to simply purge every module from the cache:
Note that clearing the cache for *.node files (native modules) might cause undefined behaviour and therefore is not supported (https://github.com/nodejs/node/commit/5c14d695d2c1f924cf06af6ae896027569993a5c), so there needs to be an if statement to ensure those don't get removed from the cache, too.
for (const path in require.cache) {
  if (path.endsWith('.js')) { // only clear *.js, not *.node
    delete require.cache[path]
  }
}
I found this useful for client-side applications. I wanted to import code as I needed it and then have it garbage collected when I was done. This seems to work. I'm not sure about the cache, but the module should get garbage collected once there is no longer a reference to it and CONTAINER.sayHello has been deleted.
/* my-module.js */
function sayHello() { console.log("hello"); }
export { sayHello };

/* somewhere-else.js */
const CONTAINER = {};
import("./my-module.js").then(module => {
  CONTAINER.sayHello = module.sayHello;
  CONTAINER.sayHello(); // hello
  delete CONTAINER.sayHello;
  console.log(CONTAINER.sayHello); // undefined
});
I have found the easiest way to handle invalidating the cache is actually to reset the exposed cache object. When deleting individual entries from the cache, the child dependencies become a bit troublesome to iterate through.
require.cache = {};