What determines the order of `require` calls in babel-transpiled scripts? - node.js

So, my workflow up to this point was to put
import "babel-polyfill";
when using features like async/await to ask babel to include the regenerator runtime in the transpilation.
I see the following problems for users requiring my module:
The user is in an ES2015 environment and transpiles their code with babel-polyfill, too. Since babel-polyfill can only be required once, they will not be able to use my module at all.
If I thus choose not to include babel-polyfill, babel doesn't know that the module requires babel-polyfill and won't respect that in the generated require order (at least that's what I think happens).
I've recently created an npm module that does not come with babel-polyfill, but requires the user to include babel-polyfill before calling require on my npm module, since it uses async and await.
Thus, in my current project, I'd like to use my module like so in index.js:
import "babel-polyfill";
import Server from "./Server";
import foo from "bar";
import baz from "qux";
where Server is a class that extends my module that requires babel-polyfill.
However, the transpilation of index.js starts like this:
!function(e, r) {
    if ("function" == typeof define && define.amd)
        define(["bar", "qux", "./Server", "babel-polyfill"], r);
    else if ("undefined" != typeof exports)
        r(require("bar"), require("qux"), require("./Server"), require("babel-polyfill"));
    // etc.
}();
Here, I can clearly see that ./Server is required before babel-polyfill, although my ES2015 import syntax asks for the opposite. In fact, the entire order is mixed up.
That's why I'm getting the error:
ReferenceError: regeneratorRuntime is not defined
How can I tell babel to respect the order in my source?

From what I can tell, you can't tell Babel to respect the order of your source - it will always hoist the imports and evaluate everything else afterwards. The only way seems to be switching to require when you want to ensure evaluation of some code (usually global assignments) first. As per my answer here.
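For instance, a minimal sketch of that workaround (the file names are assumptions): keep the polyfill in a small CommonJS entry point, since plain require calls are not hoisted and run in source order:
// entry.js - nothing here is hoisted by Babel, so the polyfill
// is evaluated first.
require("babel-polyfill");
// index.js can keep its ES2015 imports; by the time its hoisted
// requires run, the polyfill is already installed.
require("./index");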
Numerous issues have been raised (and closed/rejected) against Babel relating to this.

Related

Create a TypeScript library with optional dependencies resolved by application

I've written a library published to a private npm repo which is used by my applications.
This library contains utilities and has dependencies on other libraries; as an example, let's choose @aws-sdk/client-lambda.
Some of my applications use only some of the utilities and don't need the dependencies to the external libraries, while some applications use all of the utilities.
To avoid having all applications getting a lot of indirect dependencies they don't need, I tried declaring the dependencies as peerDependencies and having the applications resolve the ones they need. It works well to publish the package, and to use it from applications that declare all of the peerDependencies as their own local dependencies, but applications failing to declare one of the dependencies get build errors when the included .d.ts files of the library are imported in application code:
error TS2307: Cannot find module '@aws-sdk/client-kms' or its corresponding type declarations.
Is it possible to resolve this situation so that my library can contain many different utils but the applications may "cherry-pick" the dependencies they need to fulfill the requirements of those utilities in runtime?
Do I have to use dynamic imports to do this or is there another way?
I tried using @ts-ignore in the library code, and it was propagated to the d.ts file imported by the applications, but it did not help.
Setup:
my-library
package.json:
"peerDependencies": {
    "@aws-sdk/client-lambda": "^3.27.0"
}
foo.ts:
import {Lambda} from '@aws-sdk/client-lambda';
export function foo(lambda: Lambda): void {
...
}
bar.ts:
export function bar(): void {
...
}
index.ts:
export * from './foo';
export * from './bar';
my-application1 - works fine
package.json:
"dependencies": {
    "my-library": "1.0.0",
    "@aws-sdk/client-lambda": "^3.27.0"
}
test.ts:
import {foo} from 'my-library';
foo();
my-application2 - does not compile
package.json:
"dependencies": {
    "my-library": ...
}
test.ts:
import {bar} from 'my-library';
bar();
I found two ways of dealing with this:
1. Only use dynamic imports for the optional dependencies
If you make sure that types exported by the root file of the package only include types and interfaces and not classes etc, the transpiled JS will not contain any require statement to the optional library. Then use dynamic imports to import the optional library from a function so that they are required only when the client explicitly uses those parts of the library.
In the case of @aws-sdk/client-lambda, which was one of my optional dependencies, I wanted to expose a function that could take an instance of a Lambda object or create one itself:
import {Lambda} from '@aws-sdk/client-lambda';
export function foo(options: {lambda?: Lambda}) {
    let {lambda} = options;
    if (!lambda) {
        lambda = new Lambda({ ... });
    }
    ...
}
Since Lambda is a class, it will be part of the transpiled JS as a require statement, so this does not work as an optional dependency. So I had to 1) make that import dynamic and 2) define an interface to be used in place of Lambda in my function's arguments to get rid of the require statement on the package's root path. Unfortunately in this particular case, the AWS SDK does not offer any type or interface which the class implements, so I had to come up with a minimal type such as
export interface AwsClient {
    config: {
        apiVersion: string;
    }
}
... but of course, lacking a type to represent the Lambda class, you might even resort to any.
Then comes the dynamic import part:
export async function foo(options: {lambda?: AwsClient}) {
    let {lambda} = options;
    if (!lambda) {
        const {Lambda} = await import('@aws-sdk/client-lambda');
        lambda = new Lambda({ ... });
    }
    ...
}
With this code, there is no longer any require('@aws-sdk/client-lambda') on the root path of the package, only within the foo function. Only clients calling the foo function will have to have the dependency in their node_modules.
As you can see, a side-effect of this is that every function using the optional library must be async since dynamic imports return promises. In my case this worked out, but it may complicate things. In one case I had a non-async function (such as a class constructor) needing an optional library, so I had no choice but to cache the promised import and resolve it later when used from an async member function, or do a lazy import when needed. This has the potential of cluttering code badly ...
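For illustration, here is a hedged sketch of that cache-the-promise pattern; the wrapper class and its method are hypothetical, only the package name comes from this thread:
import type {Lambda} from '@aws-sdk/client-lambda';

class LambdaWrapper {
    // The constructor cannot be async, so it only kicks off the
    // dynamic import and caches the resulting promise. The optional
    // dependency is required only once someone constructs this class.
    private lambdaPromise: Promise<Lambda>;

    constructor() {
        this.lambdaPromise = import('@aws-sdk/client-lambda')
            .then(({Lambda}) => new Lambda({}));
    }

    // The cached promise is resolved later, from an async member.
    async invoke(functionName: string): Promise<void> {
        const lambda = await this.lambdaPromise;
        await lambda.invoke({FunctionName: functionName});
    }
}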
So, to summarize:
- Make sure any code that imports code from the optional library is put inside functions that the client wanting to use that functionality calls (see the sketch above).
- It's OK to have imports of types from the optional library in the root of your package, as they're stripped out when transpiled.
- If needed, define substitute types to act as placeholders for any class arguments (as classes are both types and code!).
- Transpile and inspect the resulting JS; if there is any require statement for the optional library in the root, you've missed something.
Note that if using webpack etc., dynamic imports can be tricky as well. If the import paths are constants, it usually works, but building the path dynamically (await import('@aws-sdk/' + clientName)) will usually not work unless you give webpack hints. This had me puzzled for a while, since I wrote a wrapper in front of my optional AWS dependencies, which ended up not working at all for this reason.
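To illustrate the difference, a sketch (inside an async function; the clientName variable is made up):
// Works with webpack: the module path is a constant literal.
const {Lambda} = await import('@aws-sdk/client-lambda');

// Usually fails without hints: webpack cannot statically determine
// which modules a runtime-built path may refer to.
const clientName = 'client-lambda';
const client = await import('@aws-sdk/' + clientName);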
2. Put the code that uses the optional dependencies in .ts files which are not exported by the root file of the package (i.e., index.ts).
This means that clients wanting to use the optional functionality must
import those files by sub-path, such as:
import {OptionalStuff} from 'my-library/dist/optional';
... which is obviously less than ideal.
In my case, the TypeScript IDE in VS Code fails to import the optional type, so I'm using the relative import path:
// fix: Cannot find module 'windows-process-tree' or its corresponding type declarations
//import type * as WindowsProcessTree from 'windows-process-tree';
import type * as WindowsProcessTree from '../../../../../node_modules/@types/windows-process-tree';
// global variable
let windowsProcessTree: typeof WindowsProcessTree;
if (true) { // some condition
    windowsProcessTree = await import('windows-process-tree');
    windowsProcessTree.getProcessTree(rootProcessId, tree => {
        // ...
    });
}
package.json
{
    "devDependencies": {
        "@types/windows-process-tree": "^0.2.0"
    },
    "optionalDependencies": {
        "windows-process-tree": "^0.3.4"
    }
}
based on vscode/src/vs/platform/terminal/node/windowsShellHelper.ts

mocking es6 modules in unit test

Suppose we have a file (source.js) to test:
// source.js
import x from './x';
export default () => x();
And the unit test code is very simple:
// test.js
import test from 'ava';
import source from './source'
test("OK", t => {
source();
t.pass();
});
But this is the tricky thing. The file "./x" does not exist in the test environment. Since it is a unit test, we do not need "./x". We can mock the function x() anyhow (using sinon or something else). But the unit test tool, ava, keeps showing "Error: Cannot find module './x'".
Is there any way to run the unit test without the file "./x"?
To do this, you'd need to override the import process itself. There are libraries for doing something similar (overriding the require function with proxyquire, for example), but these force you to invoke their custom function to import the module under test. In other words, you'd need to abandon ES6 module syntax to use them.
You should also note that ES modules are only supported experimentally (and relatively recently) in Node. If you're transpiling with Babel, you're not actually using ES modules. You're using the syntax, sure, but Babel transpiles them to CommonJS "equivalents", complete with require calls and module.exports assignments.
As such, if you're using Babel, you can probably use proxyquire to import modules under test in your test files, even if you're using ES module syntax in those modules. This will break, though, if you ever switch away from Babel. :\
My personal recommendation would be to avoid directly exporting anything you might need to stub. So functions like x should be in a static module imported like import foo from './foo' and invoked like foo.x(). Then, you can easily stub it with sinon.stub(foo, 'x'). Of course, the './foo' file will have to exist for this still, but what it really comes down to is how hardcore you want to be about TDD practices versus how much complexity you're willing to introduce to your mocking/stubbing process. I prefer relaxing the former to avoid the latter, but in the end it's up to you.
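A minimal sketch of that pattern with the question's files (the stubbed return value is made up for illustration):
// foo.js - keep x on a module-level object so it can be stubbed in place
export default {
    x() {
        return 'real';
    }
};

// source.js - call through the object instead of a bare import
import foo from './foo';
export default () => foo.x();

// test.js
import test from 'ava';
import sinon from 'sinon';
import foo from './foo';
import source from './source';

test('OK', t => {
    const stub = sinon.stub(foo, 'x').returns('stubbed');
    t.is(source(), 'stubbed');
    stub.restore();
});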
If you are using Jest, this can be achieved easily by mocking the import using Jest's built-in mock method.
Please refer to the link below to find out more about ES6 import mocks:
https://jestjs.io/docs/en/es6-class-mocks
If you are using jest with JavaScript, this is very simple:
use jest.mock('MODULE_NAME')
If it is a node module or one of your own modules, jest will automatically mock it.
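For the question's setup, where './x' does not exist on disk, a factory mock with {virtual: true} is needed instead of the automatic mock; a hedged sketch (the return value is made up):
// test.js - jest hoists jest.mock calls above the imports
import source from './source';

jest.mock('./x', () => () => 42, {virtual: true});

test('OK', () => {
    expect(source()).toBe(42);
});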

Almond.js and define() method. Throws: Uncaught Error: incorrect module build, no module name

The story is:
In our firm we're slowly trying to adjust our huge application to use a dynamic AMD module loader. We can't do it all at once, which is why we do it in steps: first we rewrite all JavaScript into TypeScript, using AMD requires and 'fake' modularity via almond.js. After we rewrite everything, we'll switch to a real dynamic module loader.
Here comes the problem
When we include almond on the page, following error is thrown:
almond.js:414 Uncaught Error: See almond README: incorrect module build, no module name
at define (almond.js:414)
at plotly.js:7
at plotly.js:7
It comes from several libraries, not just plotly. I managed to track it down, and it turns out that almond introduces a define() that takes three required parameters, while plotly (and some other libraries) call define() with only one or two of them:
Plotly:
if (typeof define==="function" && define.amd ) {
define([],f)
}
Almond:
define = function (name, deps, callback) {
if (typeof name !== 'string') {
throw new Error('See almond README: incorrect module build, no module name');
}
(...)
Question:
Do you have any idea how to approach solving this problem? We can load almond.js after Plotly.js, but we'd like to find a better solution and use Plotly in conjunction with almond. Is that even possible?
As Almond's README states:
Only useful for built/bundled AMD modules, does not do dynamic loading.
Emphasis added. The form of define without a string giving the module name as the first argument is used in files that have not been through a building or bundling process. An AMD bundler will take the define call and add the module name as the first parameter when it does the bundling. The files you refer to have not been included in a bundle, so they lack the module name.
Solution: use an AMD bundler, like r.js to bundle your modules into a single bundle that Almond will use.
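For example, a hedged sketch of an r.js build file; the paths and module names here are assumptions, not Plotly's actual layout:
// build.js - run with: node r.js -o build.js
({
    baseUrl: ".",
    name: "main", // your entry module
    paths: {
        plotly: "node_modules/plotly.js/dist/plotly"
    },
    // r.js writes the module name into each anonymous define()
    // call as it bundles, which is exactly what almond requires.
    out: "bundle.js"
})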

Why does tsc give so cryptic errors when leaking transitive implementation types?

I have a node module A depending on another node module B, both written in TypeScript. Module B returns Promises to A and has chosen bluebird as the Promise implementation. B of course has typings for bluebird.
However, if A doesn't have typings of bluebird (which it probably shouldn't in my case), I get errors like:
$ tsc -p .
node_modules/ensime-client/**/file-utils.d.ts(1,26):
error TS2307: Cannot find module 'bluebird'.
It took me a while to realize that this was due to me leaking the concrete Promise type of bluebird. Changing all the public return types to PromiseLike made the errors go away.
My question is: is there a way to detect these earlier in my independent module B? I recall sometimes getting errors when modules leaked types that weren't public, but in this case module B built just fine. This is all very blurry to me since I'm very new to TypeScript. I guess TypeScript is a different beast compared to what I'm used to.
Also, isn't it possible for tsc to emit better error messages for these cases?
Small update:
When I'm "leaking" a type that is locally defined this is caught directly in B:
export interface CompletionsResponse extends Typehinted {
completions: [Completion]
}
interface Completion {
}
[ts]
Property 'completions' of exported interface has or is using private name 'Completion'.
interface Completion
I would like to be able to catch this kind of thing directly if I'm exposing something from a dependency like 'bluebird'.Promise as well. It was never my intention to expose 'bluebird' as a transitive dependency, and I honestly don't even know how to do that with typings. So, as this builds just fine, what has happened is that 'bluebird' silently became a typings 'peer dependency', in npm parlance.
If a package has a type dependency on another module, this dependency should be included in the package's typings.json, via typings install bluebird --save.
Your type declaration of module B should look something like this type declaration for redux-persist (but then inside your actual project). It depends on redux for several types. Therefore, there is a dependency listed in the typings.json.
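For instance, module B's typings.json might contain something like this; the exact version tag and registry path are assumptions:
{
    "name": "module-b",
    "dependencies": {
        "bluebird": "registry:npm/bluebird#3.3.4"
    }
}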
Concerning TypeScript error messages, they're kind of a pain. ¯\_(ツ)_/¯

How to import node module in TypeScript without type definitions?

When I try to import a Node.js module in TypeScript like this:
import co = require('co');
import co from 'co';
without providing type definitions, both lines report the same error:
error TS2307: Cannot find module 'co'.
How to import it correctly?
The trick is to use purely JavaScript notation:
const co = require('co');
Your options are to either import it outside TypeScript's module system (by calling a module API like RequireJS or Node directly by hand) so that it doesn't try to validate it, or to add a type definition so that you can use the module system and have it validate correctly. You can stub the type definition though, so this can be very low effort.
Using Node (CommonJS) imports directly:
// Note there's no 'import' statement here.
var loadedModule: any = require('module-name');
// Now use your module however you'd like.
Using RequireJS directly:
define(["module-name"], function (loadedModule: any) {
// Use loadedModule however you'd like
});
Be aware that in either of these cases this may mix weirdly with using real normal TypeScript module imports in the same file (you can end up with two layers of module definition, especially on the RequireJS side, as TypeScript tries to manage modules you're also managing by hand). I'd recommend either using just this approach, or using real type definitions.
Stubbing type definitions:
Getting proper type definitions would be best, and if those are available or you have time to write them yourself, you definitely should.
If not though, you can just give your whole module the any type, and put your module into the module system without having to actually type it:
declare module 'module-name' {
export = <any> {};
}
This should allow you to import module-name and have TypeScript know what you're talking about. You'll still need to ensure that importing module-name does actually load it successfully at runtime with whatever module system you're using, or it will compile but then fail to actually run.
I got an error when I used the "Stubbing type definitions" approach in Tim Perry's answer: error TS2497: Module ''module-name'' resolves to a non-module entity and cannot be imported using this construct.
The solution was to rework the stub .d.ts file slightly:
declare module 'module-name' {
const x: any;
export = x;
}
And then you can import via:
import * as moduleName from 'module-name';
Creating your own stub file lowers the barrier to writing out real declarations as you need them.
Just import the module the following way:
import 'co';
