Overcoming RequireJS's automatic '.js' extension in cache/proxy environments

RequireJS automatically adds a '.js' extension to each module name, and a known hack is to add a ? at the end, as detailed here and here.
The problem is that adding a question mark means cache and proxy servers will usually conclude these are dynamic resources and will not cache them. I suppose that's configurable if I control the cache/proxy servers, but between my server and the client there is potentially a chain of downstream proxy servers that is beyond my control.
So, is there any other way to cause RequireJS to not add the .js extension other than putting a question mark?
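For reference, the hack in question looks something like this (the module ID and path are illustrative); the trailing question mark stops RequireJS from appending '.js', at the cost of the caching problem described above:
require.config({
  paths: {
    // the '?' suppresses the automatic '.js' suffix
    api: 'services/api?'
  }
});
require(['api'], function (api) { /* ... */ });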

There is no way to configure require.js to achieve what you want, but you can override the require.load method and preprocess the URL:
require.load = (function (load) {
  var stripJSRegex = /\.js$/;
  return function (context, moduleName, url) {
    // strip the '.js' suffix RequireJS appended before the request goes out
    load.call(this, context, moduleName, url.replace(stripJSRegex, ''));
  };
})(require.load);
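Note that this override has to run after require.js itself has loaded but before any module is requested, and your server then needs to serve the module files at the extensionless URLs.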

Related

Setting debug strings dynamically in Loopback 2.0

I use debug strings for debugging a Loopback 2.0 application. The Loopback documentation says:
The LoopBack framework has a number of built-in debug strings to help
with debugging. Specify a string on the command-line via an
environment variable as follows:
MacOS and Linux
$ DEBUG=<pattern>[,<pattern>...] node .
Is it possible to change patterns dynamically in runtime? Or is it possible to use environment-specific configuration?
Before I go deeper, note that this debug logging facility uses visionmedia's debug module, which handles almost all of the logic.
Is it possible to change patterns dynamically in runtime?
If you do it before any module is loaded, the safest and best way, I believe, is simply to manipulate the environment variable:
if (process.env.NODE_ENV === 'development') {
  // append the Loopback namespaces (avoids a literal "undefined," when DEBUG is unset)
  process.env.DEBUG = (process.env.DEBUG ? process.env.DEBUG + ',' : '') + 'loopback:*';
}
Another way would be to load debug and use its .enable method:
require('debug').enable('loopback:*');
But note that this only works if you do it before Loopback is required, since debug only applies the change to instances created after the call; in this case that means before loopback is loaded. Another caveat: there may be multiple copies of the debug module installed, depending on your dependency tree and package manager (npm@2, npm@3 and yarn lay out node_modules differently). debug might be in your top-level node_modules directory, or in each module's own node_modules directory. So if you want to do it this way, make sure you require every installed copy and enable each one.
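A minimal sketch of that last point, assuming an npm@2-style layout where loopback bundles its own copy of debug (the nested path is an assumption about your installation):
require('debug').enable('loopback:*');
try {
  // a second copy of debug may be nested under loopback itself
  require('loopback/node_modules/debug').enable('loopback:*');
} catch (e) {
  // no nested copy installed
}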
Now, if you don't want to do this at startup: the API doesn't allow changes at runtime. You can view the discussion regarding this here. There are some dirty ways to get around it, but they might break in the future, so be careful.
Firstly, there's a module called hot-debug which supposedly makes require('debug').enable work on previously created instances as well. When I tried it, it didn't work perfectly and was buggy, but it might work fine for you.
If that doesn't work for you, another way is to override the require('debug').log method. If this is defined, debug will call it instead of console.log, passing the already formatted arguments. You can set DEBUG=* and then do the filtering yourself:
require('debug').log = function (string) {
  // only let through the namespaces we care about
  if (string.includes('loopback:security')) {
    console.log(string);
  }
};
This approach will be slow in production, though, as all the debug output will be formatted before being filtered, even if nothing ends up being written to the console.
Another option is to override the require('debug').init method, which is called every time a new debug instance is created. Since every debug instance uses an enabled property to check whether it's enabled, we can toggle that:
const debug = require('debug');
const { init } = debug;
const instances = [];

debug.init = function (debugInstance) {
  init(debugInstance);
  // keep a reference to every instance ever created so we can toggle it later
  instances.push(debugInstance);
};

// Call this later to enable a single namespace like 'loopback:security:acl'
function enableNamespace(namespace) {
  instances.forEach(instance => {
    instance.enabled = instance.namespace === namespace;
  });
}
There's a lot that could be improved here, but you get the idea.
I can change debug namespaces dynamically with the hot-debug module.
In my app, I've just created a function for this:
require('hot-debug')
const debug = require('debug')

function forceDebugNamespaces (namespaces) {
  debug.enable(namespaces)
}

// Usage
forceDebugNamespaces('*,-express:*,-nodemon:*,-nodemon')
In my case, I have a config file which allows me to set process.env.DEBUG, but I needed a way to update debug namespaces without restarting my app (the config file is watched for changes by my app).
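A minimal sketch of that watching setup, assuming a JSON config file with a debug property (both the file name and the property are illustrative):
const fs = require('fs');
fs.watch('config.json', function () {
  // re-read the config and re-apply the namespaces on every change
  const config = JSON.parse(fs.readFileSync('config.json', 'utf8'));
  forceDebugNamespaces(config.debug);
});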

RequireJS Path With JS Extension Not Resolving

I am using a library that (very selfishly, IMHO) assumes that the baseUrl will point to the company's CDN:
baseUrl: "http://cdn.wijmo.com/amd-js/"
At first I thought that I would just copy the contents of the above Url to my own folder (such as /scripts/wijmo/amd-js), but that doesn't work because the good folks at Wijmo hardcoded path references in their AMD define statements, such as this:
define(["./wijmo.widget"], function () { ... });
What the above means (if I understand things properly) is that if you have any other non-Wijmo AMD modules then you must either:
(a) place them under the amd-js path, perhaps in a sub-folder named "myScripts"
(b) use hard-coded RequireJS path references to your own AMDs, like this:
paths: {
  "myAMD_1": "http://www.example.com/scripts/myScripts/myAMD_1",
  "myAMD_2": "/scripts/myScripts/myAMD_2.js"
}
(a) works, but it means that the baseUrl cannot point to the Wijmo CDN: since I don't have access to the Wijmo CDN site, I must move the files published under amd-js to my own server.
(b) sort of works, and here is my problem: if I use the myAMD_1 syntax then all is well, but that doesn't let me test on my local development machine, which uses localhost. (I don't want to get into detecting which server I'm running on and customizing the paths value; I want the path to remain the same before and after I publish to my HTTP server.)
Now, according to the RequireJS documentation:
There may be times when you do want to reference a script directly and not conform to the "baseUrl + paths" rules for finding it. If a module ID has one of the following characteristics, the ID will not be passed through the "baseUrl + paths" configuration, and just be treated like a regular URL that is relative to the document:
* Ends in ".js".
* Starts with a "/".
* Contains an URL protocol, like "http:" or "https:".
When I try to end (terminate) my path reference with .js (as shown in myAMD_2 above), RequireJS doesn't find my AMD, because it ends up looking for myAMD_2.js.js (notice the two .js suffixes). So it looks like RequireJS is not honoring what the docs describe as its path-resolution algorithm. With the .js suffix not working properly, I can't easily fix references to my own AMDs, because I don't know for sure what server name or physical path structure they'll be published to. I really want to use relative paths.
Finally, I don't want to change Wijmo's AMD modules, not only because there are dozens of them, but also because I would need to re-apply my customizations each time they issue a Wijmo update.
So if my baseUrl has to point to a hard-coded Wijmo path, how can I use my own AMDs without placing them in a subfolder under the Wijmo path and without making any fixed-path or URL assumptions about where they are published?
I can suggest a couple of approaches to consider here--both have some drawbacks, but can work.
The first approach is to provide a path for each and every Wijmo module that needs to be loaded. This will work, but you have touched on the obvious drawbacks of this approach in the description of the question: Wijmo has many modules that will need to be referenced, and maintaining the module list across updates in the future may be problematic.
If you can live with those drawbacks, here is what the RequireJS config would look like:
require.config({
  paths: {
    'wijmo.wijgrid': 'http://cdn.wijmo.com/amd-js/wijmo.wijgrid',
    'wijmo.widget': 'http://cdn.wijmo.com/amd-js/wijmo.widget',
    'wijmo.wijutil': 'http://cdn.wijmo.com/amd-js/wijmo.wijutil',
    // ... List all relevant Wijmo modules here
  }
});
require(['wijmo.wijgrid'], function() { /* ... */ });
The second approach is to initially configure the RequireJS baseUrl to load the Wijmo modules. Then once Wijmo modules have been loaded, re-configure RequireJS to be able to load your local app modules. The drawback of this approach is that all the Wijmo modules will need to be loaded up front, so you lose the ability to require Wijmo modules as needed within your own modules. This drawback will need to be balanced against the nastiness of listing out explicit paths for all the Wijmo modules as done in the first approach.
For example:
require.config({
  baseUrl: 'http://cdn.wijmo.com/amd-js',
  paths: {
    // ... List minimal modules such as jQuery and Globalize as per the Wijmo documentation
  }
});
require(['wijmo.wijgrid'], function () {
  require.config({
    baseUrl: '.'
  });
  require(['main'], function () {
    /* ... */
  });
});
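Note that require.config calls merge with the configuration already in place, so the second call only swaps the baseUrl; the Wijmo modules already loaded from the CDN remain cached under their original module IDs.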

RequireJS Dynamic Paths Replacement

I have a requirejs module which is used as a wrapper to an API that comes from a different JS file:
apiWrapper.js
define([], function () {
  return {
    funcA: apiFuncA,
    funcB: apiFuncB
  };
});
It works fine, but now I have some new use cases where I need to replace the implementation, e.g. invoke my own function instead of apiFuncA. But I don't want to touch the other places in my code where I call the functions, like apiWrapper.funcA(param).
I can do something like the following:
define([], function () {
  return {
    funcA: function () {
      if (regularUseCase) {
        // forward all arguments to the external API
        return apiFuncA.apply(this, arguments);
      } else {
        return (function myFuncAImplementation(params) {
          // my code, instead of the external API
        }).apply(this, arguments);
      }
    },
    funcB: apiFuncB
  };
});
But I feel like it doesn't look nice. What's a more elegant alternative? Is there a way to replace the module (apiWrapper) dynamically? Currently it's defined in my require.config paths definition. Can this path definition be changed at runtime so that I'll use a different file as a wrapper?
Well, first of all, if you use RequireJS you probably want to run a build before production. As such, it is important that you don't update paths dynamically at runtime or depend on runtime variables to define paths, as this will prevent you from running r.js successfully.
There are a lot of tools (RequireJS plugins) out there that can help you dynamically change the path to a module or conditionally load a dependency.
First, you could use require.replace, which allows you to change parts (or all) of a module URL depending on a check you make, without breaking the build.
If you're looking for polyfilling, there's the requirejs feature plugin.
And there's a lot more listed here: https://github.com/jrburke/requirejs/wiki/Plugins
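If you don't need an r.js build, a plugin-free alternative is to resolve the wrapper's path once, up front, before anything requests it. A minimal sketch (the module ID, flag and file names are assumptions for illustration):
require.config({
  paths: {
    // pick the real wrapper or a stand-in once at startup
    apiWrapper: window.useMockApi ? 'wrappers/apiWrapperMock' : 'wrappers/apiWrapper'
  }
});
Both files just need to define() the same interface (funcA, funcB), and every existing require(['apiWrapper']) keeps working unchanged.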

On-demand require()

Say I create a library in ./libname which contains one main file (main.js) and multiple optional library files which are occasionally used with the main object (a.js and b.js).
I create an index.js file with the following:
exports.MainClass = require('./main.js').MainClass; // shortcut
exports.a = require('./a');
exports.b = require('./b');
And now I can use the library as follows:
var lib = require('./libname');
lib.MainClass;
lib.a.Something; // Here I need the optional utility object
lib.b.SomeOtherThing;
However, that means I always load 'a.js' and 'b.js', not just when I really need them.
Sure, I can manually load the optional modules with require('./libname/a.js'), but then I lose the pretty lib.a dot-notation :)
Is there a way to implement on-demand loading with some kind of index file? Maybe some package.json magic can help here?
This may be possible if you have the MainClass dynamically load the additional modules on demand. But I suspect this will also mean an adjustment in the API used to access the modules.
I guess your motivation is to "avoid" the extra processing used by "non-required" modules.
But remember that Node is single-threaded, so the memory footprint of loading a module is not per-connection, it's per-process. Loading a module is a one-off cost to get it into memory.
In other words, the modules are only ever loaded when you start your server, not every time someone makes a request.
I think you're looking at this from a client-side programming perspective, where it's advantageous to load scripts only when they are required, to save on both processing and HTTP requests.
On the server, the most you will be saving is a few extra bytes in memory.
Looks like the only way is to use getters. In short, like this:
module.exports = {
  MainClass: require('./main.js').MainClass,
  // lazy getters: each module is only require()d on first property access
  get a() { return require('./a.js'); },
  get b() { return require('./b.js'); }
};
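Note that require() caches modules after the first load, so each getter pays the loading cost only once; subsequent reads of lib.a or lib.b return the cached module.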

Is it secure to use a controller, module or action name to set include paths?

I want to set up include paths (and other paths, like view script paths) based on the module being accessed. Is this safe? If not, how could I safely set up include paths dynamically? I'm doing something like the code below (this is from a controller plugin):
public function dispatchLoopStartup(Zend_Controller_Request_Abstract $request) {
    $modName = $request->getModuleName();
    $modulePath = APP_PATH.'/modules/'.$modName.'/classes';
    set_include_path(get_include_path().PATH_SEPARATOR.$modulePath);
}
I'm not sure whether it is safe or not, but it doesn't sound like best practice. What if someone entered a module name like ../admin/? You should sanitize the module name before using it.
It's fine as long as you sanitize your variables before using them, but it won't be very performant. Fiddling with include paths at runtime causes a serious performance impact.
You're trying to load models/helpers per module? You should look at Zend_Application:
Zend_Application provides a bootstrapping facility for applications which provides reusable resources, common- and module-based bootstrap classes and dependency checking. It also takes care of setting up the PHP environment and introduces autoloading by default.
Emphasis mine.
