Why is the Object.keys function on the global object not returning the process field?
> Object.keys(global)
[
  'global',
  'clearInterval',
  'clearTimeout',
  'setInterval',
  'setTimeout',
  'queueMicrotask',
  'clearImmediate',
  'setImmediate'
]
> global.process
process {
  version: 'v13.7.0',
  versions: {
    node: '13.7.0',
    v8: '7.9.317.25-node.28',
    uv: '1.34.1',
    ...
process is not actually an enumerable key on the global object as of Node version 12 (described in the changelog https://nodejs.org/tr/blog/uncategorized/10-lts-to-12-lts/#notable-changes-in-node-js-12-0-0, implemented through PR https://github.com/nodejs/node/pull/26882). Instead, a getter is defined for it. E.g. try out global.__lookupGetter__('process') and you should see that you get a function back:
> Object.keys(global)
[
  'global',
  'clearInterval',
  'clearTimeout',
  'setInterval',
  'setTimeout',
  'queueMicrotask',
  'clearImmediate',
  'setImmediate'
]
> global.__lookupGetter__('process')
[Function: get]
> global.__lookupGetter__('process')()
process {
  version: 'v13.7.0',
  versions: {
    node: '13.7.0',
    ...
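More precisely, Object.keys only returns own enumerable properties, and the process getter is defined as non-enumerable, which is why it gets skipped. You can confirm this with a property descriptor check (a quick sketch; exact output may vary between Node versions):

// Inspect how 'process' is defined on the global object.
const desc = Object.getOwnPropertyDescriptor(global, 'process');
console.log(typeof desc.get); // 'function' -- an accessor, not a plain value
console.log(desc.enumerable); // false -- so Object.keys() omits it
// The property still exists on global; it is just non-enumerable:
console.log(Object.getOwnPropertyNames(global).includes('process')); // true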
The reason for the change as described in the PR:
This implements just the semver major breaking changes from #26334 Restrict process and Buffer globals to CommonJS, which is to make global.process and global.Buffer getters / setters over value properties.
...
The other referenced PR is https://github.com/nodejs/node/pull/26334 which is described as:
This PR deprecates access to global.process and global.Buffer access in ECMAScript modules only, while ensuring they continue to behave fine for all CommonJS modules and contexts providing comprehensive backwards compatibility.
This is done by making them getters which check the context in which they are called and throw when inside an ECMAScript module. To avoid this getter slowpath in accesses, process and Buffer are also added as a context to the compileFunction wrapper in CJS. In addition these getters are only defined as soon as there is a load of an ECMAScript module to avoid any slowdowns in these cases.
The benefits of this are that then ECMAScript modules don't by default have to assume access to all the root-level-security functions and properties on process allowing access control to be added in future, as well as helping towards browser compatibility by making process an import.
ECMAScript modules share the same realm global, so there isn't a way to do this otherwise. In addition once users start running and writing and publishing ECMAScript modules in Node.js that assume the process and Buffer globals exist, it becomes very difficult to change this then.
For these reasons I think it is quite important to land this with the ECMAScript modules implementation in Node.js to provide these properties, or risk never being able to provide them at all in Node.js due to ecosystem compatibility constraints.
Related
Does the Node interpreter look for core modules (let's say "fs") within the Node binary? If yes, are these modules packaged as JS files? Are the core modules that are referenced within our code converted to C/C++ code first and then executed? For example, I see a method in the _tls_common.js file (https://github.com/nodejs/node/blob/master/lib/_tls_common.js) called "loadPKCS12", and the only place that I see this method being referenced/defined is within the "node_crypto.cc" file (https://github.com/nodejs/node/blob/master/src/node_crypto.cc). So how does Node link a method in JavaScript with the one defined in the C/C++ file?
Here is the extract from the _tls_common.js file that makes use of the "loadPKCS12" method:
        if (passphrase) {
          c.context.loadPKCS12(buf, toBuf(passphrase));
        } else {
          c.context.loadPKCS12(buf);
        }
      }
    } else {
      const buf = toBuf(options.pfx);
      const passphrase = options.passphrase;
      if (passphrase) {
        c.context.loadPKCS12(buf, toBuf(passphrase));
      } else {
        c.context.loadPKCS12(buf);
There are two different (but seemingly related) questions asked here. The first one is: "How do the core modules work?" The second is: "How does Node.js let C++ code get referenced and executed in JavaScript?" Let's take them one by one.
How do the core modules work?
The core modules are packaged with the Node.js binary. While they are packaged with the binary, they are not converted to C++ code before packaging. The internal modules are loaded into memory during bootstrap of the Node process. When a program executes, let's say require('fs'), the require function simply returns the already-loaded module from the cache. The actual loading of the internal module happens in C++ code.
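You can observe the caching behavior directly: requiring a core module twice returns the exact same object (a small check you can run in any script):

// Core modules are loaded once during bootstrap and then served from
// the module cache, so repeated require calls return the same object.
const fs1 = require('fs');
const fs2 = require('fs');
console.log(fs1 === fs2); // true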
How does Node.js let C++ code get referenced in JS?
This ability comes partly from the V8 engine, which exposes the ability to create and manage JS constructs in C++, and partly from Node.js / libuv, which provide a wrapper on top of V8 to create the execution environment. The documentation about such native addon modules can be accessed here. As the documentation states, these C++ modules can be used in a JS file by requiring them, like any other ordinary JS module.
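For instance, once a native addon has been compiled (typically with node-gyp), consuming it from JavaScript looks like this (a sketch; the addon name and build path are hypothetical and depend on your setup):

// The compiled .node binary loads through require just like a JS module.
const addon = require('./build/Release/hello_addon.node');
// hello() here would be a function implemented and exported from C++.
console.log(addon.hello());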
Your example of using a C++ function in JS (loadPKCS12), however, is a more special case: Node.js's internal C++ functionality. loadPKCS12 is called on an object of SecureContext, imported from the crypto C++ module. If you follow the link to the SecureContext import in _tls_common.js above, you will see that crypto is not loaded using require(); instead a special (global) method, internalBinding, is used to obtain the reference. At the last line of the node_crypto.cc file, the initializer for the internal module crypto is registered. Following the chain of initialization, node::crypto::Initialize calls node::crypto::SecureContext::Initialize, which creates a function template, assigns the appropriate prototype methods, and exports it on target. Eventually these exported functionalities from the C++ world are imported and used in the JS world using internalBinding.
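The import in _tls_common.js looks roughly like this (paraphrased from the Node source; note that internalBinding is only available to Node's own internal modules, not to user code):

// Inside Node's lib/_tls_common.js (internal code, paraphrased):
const { SecureContext } = internalBinding('crypto');
// SecureContext's prototype methods, including loadPKCS12, are the
// functions that node_crypto.cc registered from C++.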
We're building a React/Flux application using Node.js/webpack, and therefore all of our new code is written in CommonJS modules.
There are a few isolated cases where we need to access an object from one of our CommonJS modules in legacy code. Is there any way to do this? And yes, I assume this is gross. We're eventually going to migrate all of our code to CommonJS modules, but this is a stop-gap.
In the legacy JS code (not using CommonJS), create a function in the global namespace:
/* Legacy code NOT using CommonJS */
function getCommonJSObject(obj) {
  ...
}
In the CommonJS code you can test for this global function and, if it exists, call it, passing whichever object you need access to:
/* CommonJS object */
if (window['getCommonJSObject']) {
  window.getCommonJSObject(obj);
}
By doing this we can still follow our Flux pattern from legacy JavaScript code in our app. If there's a better way to do this, let us know. But this is a handy workaround.
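An alternative stop-gap works in the other direction: explicitly expose the objects you need on window from the CommonJS side, so legacy code can read them. A sketch (the module path and bridge name are hypothetical):

// In a CommonJS module: publish a single store on one explicit bridge
// object instead of scattering ad-hoc globals.
var myStore = require('./stores/myStore');
window.__legacyBridge = window.__legacyBridge || {};
window.__legacyBridge.myStore = myStore;

// Legacy (non-CommonJS) code can then read:
// var store = window.__legacyBridge.myStore;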
I'm working on a project that uses require.js, and I was faced with a simple question about declaring libraries as dependencies within every single module. I have seen many examples with require.js, and in every one of them libraries like Backbone, jQuery, Underscore, etc. were declared in every module. That would be fine if your application consisted of only a couple of modules. But if your application has 40-50 modules, it becomes a bit tedious to define jQuery every single time.
On the other hand, we can load all libraries up front with require() this way:
// Just an example
require.config({
  paths: {
    jquery: '../bower_components/jquery/jquery',
    underscore: '../bower_components/underscore/underscore',
    backbone: '../bower_components/backbone/backbone'
  }
});
require(['jquery', 'underscore', 'backbone'], function () {
  require([
    'app',
    'bootstrap'
  ], function (App) {
    var app = new App();
    document.body.appendChild(app.el);
  });
});
And then we can use Backbone, '_' and '$' in the App without declaring them as dependencies.
So I'm trying to understand why we should define them every time. Is it just strict adherence to the Module pattern, or something else?
Thanks.
Having your libraries passed in as arguments to your require factory function is a form of dependency injection. Now, depending on exactly how AMD-friendly your library dependencies are, you may even be able to use several different versions of the library at once in an application. Writing your modules in this way with explicit library dependencies can allow you future flexibility of mixing modules which require jQuery 1.8 and jQuery 2.0, for instance. A different jQuery version can be passed to each module.
When you don't define all your dependencies explicitly every time, you're relying on the global variable (window.$ or window._) created as a side effect of loading the library. In that case, all the modules have to depend on the same version of the library.
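For example, RequireJS's map configuration lets different modules receive different versions under the same dependency name (a sketch; the paths and module names are hypothetical):

require.config({
  paths: {
    'jquery-1.8': 'libs/jquery-1.8.3',
    'jquery-2.0': 'libs/jquery-2.0.3'
  },
  map: {
    // legacy/widget receives jQuery 1.8 whenever it asks for 'jquery'
    'legacy/widget': { 'jquery': 'jquery-1.8' },
    // modern/widget receives jQuery 2.0 for the same dependency name
    'modern/widget': { 'jquery': 'jquery-2.0' }
  }
});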
Here's an example
$ cat main.js
App = {
  version : 1.1
};
require('./mymod.js');
$ cat mymod.js
console.log(App.version);
$ node main.js
1.1
Note how I declared App in main.js without var. This allowed me to access App from mymod.js without having to call require. If I declare App with var, this won't work.
I want to understand why this happens. Is it the intended behaviour for Node.js, or a bug? Is this behavior consistent with the ECMAScript or CommonJS standards?
This trick gives a powerful mechanism to circumvent the require module system of Node.js. In every file, define your objects and add them to the top-level App namespace. Your code in other files will automatically have access to those objects. Did I miss something?
If you assign a variable without using var, it is automatically a global variable. That's just the way (non-strict) JavaScript works. If you put 'use strict'; (quotes required) at the top of your JS file, this becomes an error instead.
It all has to do with local scope vs. global scope.
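A quick sketch of the strict-mode behavior:

'use strict';
App = { version: 1.1 }; // ReferenceError: App is not defined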
You can even do this (which is much neater):
app.js:
module.exports = {
  version : 1.1
};
main.js:
var App = require('./app.js');
console.log(App.version);
Defining a variable without a preceding var will place it into the global namespace, which is visible to all of your JavaScript code.
While this may seem a useful feature, it is generally considered bad practice to "pollute" the global namespace and can lead to subtle, hard-to-locate bugs when two non-related files both rely upon or define variables with the same name.
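A sketch of that hazard (file names hypothetical): two unrelated files assign to the same implicit global and silently clobber each other's state.

// counter.js -- tracks widgets
count = 0;   // implicit global
count += 1;

// metrics.js -- unrelated file, loaded later
count = 100; // silently overwrites counter.js's state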
In the Node.js environment there is a global scope referenced by 'global', just like the 'window' we have in browser environments. In effect, every JavaScript host environment starts by creating a global object.
When require('./main.js') is executed, the following wrapper function is created and executed against the global scope 'global'.
(function(exports, ...) {
  // content of main.js
  App = {
    version : 1.1
  };
})(module.exports, ...);
When the above function is executed, since there is no var declaration for App, a property named 'App' is created on the global object and assigned the value. This behavior is according to the ECMAScript spec: in non-strict mode, assigning to an undeclared identifier creates a property on the global object.
That is how the App gets into global scope and is accessible across all modules.
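You can confirm this directly in a script or the REPL:

App = { version: 1.1 };          // no var -- lands on the global object
console.log(global.App === App); // true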
require was introduced to standardize the development of modules so that they can be ported and reused.
If I have a module that requires an application namespace, e.g.:
define(["app"], function(App){
[...]
});
... and the namespace requires libraries used by all of my modules, e.g.:
define(["jquery", "underscore", "backbone"], function($, _, Backbone){
[...]
});
... then all of my modules have access to the libraries required by the namespace, i.e. I can use $, _, and Backbone.
I like this behavior because I can avoid being repetitious, but I suspect that I'm cheating somehow, and that I should require libraries in each module.
Can anyone set me straight here?
Yeah, that's kinda hacky. You only have access to jQuery, Underscore and Backbone because they're also defined on the global scope. Backbone and Underscore aren't real AMD modules; they have to use a shim config. jQuery declares itself both on the global scope and as an AMD module, so it works everywhere.
So, yes, it works like that, but it's not optimal. Real (non-shimmed) AMD modules won't work this way, as they need to be passed in the define function's arguments, and you won't be able to pull out just one module to test it in a separate environment, etc. This way, you also cannot load different versions of a script to work with different modules/app sections/pages.
The goal of AMD is to bring modularity to your code, so every module declares its own dependencies and works out of the box without relying on the global scope (which is a good thing to prevent name collisions and conflicts with third parties/other devs working on the same project).
If you find it redundant to redeclare your base dependencies every time, create a boilerplate file that you just copy/paste when creating another module (it's better than nothing); see the sketch below. Maybe some command-line tools can even build the AMD module wrapper for you.
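A minimal boilerplate along those lines (the module body and names are hypothetical):

// A well-formed AMD module declaring its own dependencies explicitly.
define(['jquery', 'underscore', 'backbone'], function ($, _, Backbone) {
  var MyView = Backbone.View.extend({
    render: function () {
      this.$el.html(_.escape('hello'));
      return this;
    }
  });
  return MyView;
});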
Soooo, yes it works, but it won't scale if your project ever gets bigger or needs to be updated piece by piece.
Hope this helps!
Good news for the above answer: Underscore 1.6.0 is now wrapped as an AMD module :)
See "lib.chartjs" below for exporting globals from non-AMD-wrapped ("shimmed") JavaScript libraries:
requirejs.config({
  paths: {
    "moment": "PATH_TO/js/moment/2.5.0/moment.min",
    "underscore": "PATH_TO/js/underscore/1.6.0/underscore",
    "jquery": "PATH_TO/js/jquery/1.10.2/jquery.min",
    "lib.jssignals": "PATH_TO/js/jssignals/1.0.0-268/signals.min",
    // WORKAROUND : jQuery plugins + shims
    "lib.jquery.address": "PATH_TO/js/jqueryaddress/1.6/jquery-address",
    "lib.jquery.bootstrap": "PATH_TO/js/bootstrap/3.0.3/bootstrap",
    "lib.chartjs": "PATH_TO/js/chartjs/0.2/Chart.min"
  },
  shim: {
    "lib.jquery.address": {deps: ["jquery"]},
    "lib.jquery.bootstrap": {deps: ["jquery"]},
    "lib.chartjs": {deps: ["jquery"], exports: "Chart"}
  }
});