Immediately invoked function expression isn't executed on second require - node.js

Here is some sample code to make it clearer.
This is my module:
module.exports = ((parameter) => {
console.log(parameter);
})('some value');
mymodule is actually the main server file and thus has to be executed immediately.
My unit tests in the test suite require my module, but the immediately invoked function expression is executed only the first time:
require('./mymodule');
require('./mymodule');
How can I make this run every time?

You can invalidate the cache that Node maintains:
require('./mymodule');
delete require.cache[require.resolve('./mymodule')]
require('./mymodule');
You are probably better off, though, exporting the function and calling it when needed, instead of only exporting the result.
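A minimal sketch of that approach (the parameter value is just the one from the question):
// mymodule.js
module.exports = (parameter) => {
  console.log(parameter);
};

// in the server entry point and in each test:
const run = require('./mymodule');
run('some value'); // runs every time it is called, regardless of caching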

From nodejs docs:
Multiple calls to require('foo') may not cause the module code to be
executed multiple times. This is an important feature. With it,
"partially done" objects can be returned, thus allowing transitive
dependencies to be loaded even when they would cause cycles.
https://nodejs.org/docs/v0.4.12/api/modules.html#caching
If you want to have a module execute code multiple times, then export a function, and call that function.


synchronous require so entire Node.js script isn't in callback

When using WebAssembly, there is a callback for onRuntimeInitialized(). You basically can't do anything until it happens.
So if you have a library that is implemented in it, you have to say:
var mylib = require('mylib')
mylib.onRuntimeInitialized = function() {
...
// Anything that wants to use *anything* from mylib
// (doesn't matter if it's synchronous or asynchronous)
...
}
On the plus side, you're not making Node wait to do any initialization that might not rely on mylib, so other modules can be doing fetches or whatever they need. On the negative side, it's pretty bad ergonomics, especially if everything you're doing depends on this library.
One possibility might seem to be to fold the initialization waiting into a promise, and then wait on it:
var mylib = require('mylib')
await mylib.Startup()
But people apparently write about how much they don't like the idea of top-level await. And my opinion on it is fairly irrelevant either way, as it's not allowed. :-/
So is there really no way to hold up code at the top level besides wrapping the whole app in a callback?
One thing to note with Node is that requires will return the same object, no matter what file that require is called in. Order does matter, but it will be the same object in all files.
So in your main index.js you could do something like
var myLib = require('mylib')
myLib.libby = myLib.initialize()
and then in another file, doesStuff.js, you can do:
const libby = require('mylib').libby
module.exports = function doStuff() {
/* do stuff with initialized libby object */
}
Typically the way this works is that the exported function in doesStuff.js is not called until everything is initialized, for example when a web route is handled. So your server is already running, and libby will be initialized and ready to use by the time it's called.
If you have something that absolutely cannot fail, say the server must not run if the DB connection is not successful, then waiting is appropriate. So yes, you'd need to wrap everything (at least the core of your actions) in a callback so your server knows when it's safe to start.
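A rough sketch of that pattern, assuming the onRuntimeInitialized hook from the question and a hypothetical startServer() that binds your routes and starts listening:
var myLib = require('mylib');

myLib.onRuntimeInitialized = function() {
  // nothing that touches myLib runs before this point
  myLib.libby = myLib.initialize();
  startServer(); // hypothetical: start accepting requests only once the library is ready
};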

JestJS testing synchronous code shows asynchronous behavior

I am new to JestJS and trying to test synchronous code. The following test passes alright:
bin = new Compiler().compile('{int a = 42;}');
test('Integer constant declaration', function() {
expect(bin.dumpVariables()).toBe("[int const a = 42]\n");
});
However when I append this code:
bin = new Compiler().compile('{bool b;}');
test('Another test', function() { ... });
... the first test fails because bin already has the new value from the assignment that comes after it. Why is that? My code is fully synchronous, so I would have expected the first test to pass first and only then see the effects of the code that follows it.
Because bin lives in the scope of both tests. But the main problem is that all code written outside of the test, describe, it (and similar) blocks is executed immediately. Let's have a look at what a test call looks like:
test('testname', testFunction)
This means the test is added to the test suite with a name and a function, but it does not mean that testFunction is called immediately. If it worked like that, a beforeEach block couldn't work. So first the test suite collects all the different test, describe and it calls, and then it runs them. If you create a variable in the upper scope outside of the tests, it will have the value that was assigned to it last; in your case, the result of compiling {bool b;}.
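One way around this (a sketch, reusing the Compiler from the question) is to build bin inside a beforeEach block or inside the test itself, so the value is created when the test actually runs rather than at collection time:
let bin;

beforeEach(function() {
  // re-create the value for every test, after collection has finished
  bin = new Compiler().compile('{int a = 42;}');
});

test('Integer constant declaration', function() {
  expect(bin.dumpVariables()).toBe("[int const a = 42]\n");
});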

Creating a yieldable Node Module/Object

I am trying to create a Node module (using harmony) that, upon loading by another module/application, has to be yielded to so that things in its construct can be executed and loaded before any of its exposed functions can be called.
The issue I am having is that I cannot seem to yield to the internal function that is being executed, using module.exports. An example would help.
module.exports = function*(s_id){
console.log('loading the module lets it execute up till here');
if (!(this instanceof Tester)) return yield new Tester();
}
function* Tester(){
console.log('but we never execute this generator function');
}
Tester.prototype = {
model : function*(){
// other functions
}
}
It's been stumping me for hours now! I feel like the solution is super simple but I cannot seem to wrap my head around it. I have tried to simply make the Tester() function the export, but am still having the same issue. Why can't I seem to yield to the Tester() function?
Also, what may an alternative be to this approach? I want to maintain the Object nature of the module so that the module can be loaded with different inputs, such as the s_id variable/object in the example above.
a Node module (using harmony) that upon loading by another module/application, has to be yielded to so that things in its construct can be executed and loaded before any of its exposed functions can be called
Don't do that. Generators are not made for asynchrony. yield doesn't do what you want here. A module is not "yielded" to await something in it to load. yield is magic, but not async magic.
If you must use an asynchronous module loading process, export a promise for your actual module. That is a standard interface for something to await, and can be consumed using a standardized approach that does not rely on internals of your module.
You still can use yield syntax for constructing that promise, just use your favorite coroutine library.
return yield new Tester();
…
function* Tester(){…}
Uh oh. Well yes, apparently it is possible to call generator functions as constructors. But believe me, it is not what you want. A constructor for an arbitrary object should return that object, instead of an iterator (just like it shouldn't return a promise). If some of your object methods are generator methods, that's fine, but your constructor should not be.
If you really want to use a generator function in your code like that (and it's not meant to be a constructor), you
- will need to yield* the iterator you've created (Tester()) instead of yielding it
- must not overwrite its .prototype property like you did, as that causes the iterator to malfunction (of course you should never do that at all, even though most of the time it works)
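A minimal sketch of the promise-exporting approach, where loadEverything() is a placeholder for whatever asynchronous setup the module needs and Tester is an ordinary constructor:
// tester.js
function Tester(s_id) {
  this.s_id = s_id;
}
Tester.prototype.model = function() { /* other methods */ };

module.exports = function(s_id) {
  // loadEverything is a placeholder for the async work done at load time
  return loadEverything(s_id).then(function() {
    return new Tester(s_id);
  });
};

// consumer.js
require('./tester')('some value').then(function(tester) {
  // tester is fully initialized here; its exposed methods are safe to call
});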

Require.js (almond.js) Timing Off

I'm trying to use Almond.js and the Require.js optimizer to make a standalone file that can be used without Require.js. I've been able to successfully compile the file, and all the code runs, but there's one problem: the code executes in the wrong order.
Let's say my main file (the one that's in the include= option passed to the optimizer) has the following:
console.log('outside');
require(['someFile'], function(someModule) {
console.log('inside');
window.foo = someModule.rightValue;
});
and then out in my HTML I have:
<script src="myCompiledFile.js"></script>
<script>
console.log('out in the HTML');
</script>
Since I'm using almond.js and a compiled file (and therefore there's no file loading going on) I would expect to get the output:
outside
inside
out in the HTML
However, it turns out there's still some asynchronicity going on, because what I actually see is:
outside
out in the HTML
inside
and if I try to check window.foo from the HTML it's not there.
So, my question is, how can I get the code to be more like a normal/synchronous JS file, which runs all its code first before passing things on to the next script block? I can't exactly tell my users "wrap all your code in a window.setTimeout".
By default Almond replicates the timing semantics of RequireJS' require call. Since require is asynchronous in RequireJS, it is also asynchronous in Almond. This can be readily observed in the source: Almond uses a setTimeout to schedule the execution of a module even though it could execute it right away. It makes sense because in the general case developers don't expect that the code they've crafted to work asynchronously will suddenly become synchronous just because they are using Almond. It does not matter in every project but in a project where the UI (for instance) should be refreshed in a certain order, changing the timing semantics could break the code.
Two options:
1. As the comment just before the setTimeout states, the string form require('id') (which is synchronous) works globally with Almond, because once your bundle is loaded everything is guaranteed to have been loaded already (Almond does not load dynamically). There is a sketch of this form after the example below.
2. There's also a way to call require with extra arguments. If the fourth argument is true, the call is going to be synchronous:
require(['someFile'], function(someModule) {
console.log('inside');
window.foo = someModule.rightValue;
}, undefined, true);
This can be ascertained by reading the Almond source. There's no documentation I could find on forceSync (that's the name of the parameter in question), but some bug reports mention it and I've not seen any comment suggesting it is meant to be private functionality.
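The first option, for comparison, would look something like this (a sketch, assuming someFile is included in the compiled bundle):
console.log('outside');
// Single-string require is a synchronous lookup in Almond: the module is
// already defined inside myCompiledFile.js, so no callback is needed.
var someModule = require('someFile');
console.log('inside');
window.foo = someModule.rightValue;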

Understanding Node.js modules: multiple requires return the same object?

I have a question related to the node.js documentation on module caching:
Modules are cached after the first time they are loaded. This means
(among other things) that every call to require('foo') will get
exactly the same object returned, if it would resolve to the same
file.
Multiple calls to require('foo') may not cause the module code to be
executed multiple times. This is an important feature. With it,
"partially done" objects can be returned, thus allowing transitive
dependencies to be loaded even when they would cause cycles.
What is meant by may?
I want to know whether require will always return the same object. So in case I require module A in app.js and change the exports object within app.js (the one that require returns), and after that require a module B in app.js that itself requires module A, will I always get the modified version of that object, or a new one?
// app.js
var a = require('./a');
a.b = 2;
console.log(a.b); //2
var b = require('./b');
console.log(b.b); //2
// a.js
exports.a = 1;
// b.js
module.exports = require('./a');
If both app.js and b.js reside in the same project (and in the same directory) then both of them will receive the same instance of A. From the node.js documentation:
... every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
The situation is different when a.js, b.js and app.js are in different npm modules. For example:
[APP] --> [A], [B]
[B] --> [A]
In that case the require('a') in app.js would resolve to a different copy of a.js than require('a') in b.js and therefore return a different instance of A. There is a blog post describing this behavior in more detail.
Node.js caches loaded modules, which keeps it from re-reading files thousands of times while executing huge server projects.
This cache is exposed via the require.cache object. Note that this object is readable and writable, which makes it possible to delete entries from the cache without killing the process.
http://nodejs.org/docs/latest/api/globals.html#require.cache
Oh, and I forgot to answer the question. Modifying the exported object does not affect the next module load; that would cause a lot of trouble. require always returns a new instance of the object, not a reference. Editing the file and deleting the cache does change the exported object.
After doing some tests: node.js does cache the module.exports. Modifying require.cache[{module}].exports results in the modified object being returned.
Since the question was posted, the document has been updated to make it clear why "may" was originally used. It now answers the question itself by making things explicit (my emphasis to show what's changed):
Modules are cached after the first time they are loaded. This means
(among other things) that every call to require('foo') will get
exactly the same object returned, if it would resolve to the same
file.
Provided require.cache is not modified, multiple calls to
require('foo') will not cause the module code to be executed multiple
times. This is an important feature. With it, "partially done" objects
can be returned, thus allowing transitive dependencies to be loaded
even when they would cause cycles.
From what I have seen, if the module name resolves to a file previously loaded, the cached module will be returned; otherwise the new file will be loaded separately.
That is, caching is based on the actual file name that gets resolved. This is because, in general, there can be different versions of the same package installed at different levels of the file hierarchy, and they must be loaded accordingly.
What I am not sure about is whether there are cases of cache invalidation not under the programmer's control or awareness that might make it possible to accidentally reload the very same package file multiple times.
If the reason you want require(x) to return a fresh object every time is just that you modify that object directly (which is a case I ran into), just clone it, and then modify and use only the clone, like this:
var a = require('./a');
a = JSON.parse(JSON.stringify(a));
try drex: https://github.com/yuryb/drex
drex is watching a module for updates and cleanly re-requires the
module after the update. New code is being require()d as if the new
code is a totally different module, so require.cache is not a problem.
When you require an object, you are requiring its reference address, and by requiring the object twice, you will get the same address! To have copies of the same object, you should copy (clone) it.
var obj = require('./obj');
a = JSON.parse(JSON.stringify(obj));
b = JSON.parse(JSON.stringify(obj));
c = JSON.parse(JSON.stringify(obj));
Cloning can be done in multiple ways; see this for further information.
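For example, a few common options, each with different trade-offs (structuredClone requires Node 17+; this is just a sketch):
var obj = require('./obj');

var viaJson = JSON.parse(JSON.stringify(obj)); // deep copy, but drops functions and undefined
var shallow = Object.assign({}, obj);          // shallow copy: nested objects are still shared
var deep = structuredClone(obj);               // deep copy (Node 17+); throws if obj contains functions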
