vite: use "keep-names" esbuild flag for production build

One of our third-party libraries requires us to preserve specific function names. In webpack we did that with terser.keep_fnames. esbuild has https://esbuild.github.io/api/#keep-names, so we'd like to use that, but we cannot find how to enable this option for a Vite production build.
According to the docs, esbuild is used for minification. How do we enable this flag (or a comparable option)? Note that we'd prefer not to use terser, as it's much slower than esbuild.
There is an undocumented config.esbuild prop that seems to be used in the current master code:
https://github.com/vitejs/vite/blob/f72fdc7c995db502ca89f0057cfc1fcd6660212f/packages/vite/src/node/plugins/esbuild.ts#L352
but when I tried adding config.esbuild.keepNames to the config object (as nested object fields, of course) it didn't do anything.
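For reference, this is roughly the shape of the config that was tried - a minimal sketch, assuming a standard vite.config.js (keepNames is esbuild's own option; whether Vite's minify step honors it is exactly the open question):

// vite.config.js
export default {
  esbuild: {
    keepNames: true, // esbuild's keep-names flag
  },
};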

Related

How can I make vscode assume a node.js context, so that it doesn't assume `fetch` exists?

By default, when editing a JavaScript file in VSCode, it will assume that the fetch function and related types exist in the current context. This makes sense for JavaScript designed to run in the browser, but when running on node.js the fetch function does not exist unless it is installed via node-fetch. I find that in this context, VSCode is misleading, as it will not highlight an error when you try calling the fetch function, or access other types such as Request and Response, even though they do not exist unless you have node-fetch installed.
How can I configure vscode to assume a node.js context, and therefore assume that fetch does not exist unless I explicitly import it from node-fetch?
Why web types are there by default
From the docs for tsconfig.json compilerOptions.lib:
TypeScript includes a default set of type definitions for built-in JS APIs (like Math), as well as type definitions for things found in browser environments (like document).
How to change the defaults
Create a tsconfig.json or jsconfig.json, and set the compilerOptions.lib array to not contain "DOM", which means that lib.dom.d.ts (the "DOM standard library" type definitions file that comes with TypeScript) will not be assumed. You should specify which ECMAScript standard you want to write your source code in.
The config file also has fields to control what files it takes effect on: files, include, and exclude. If you specify none of them, include will default to **, which matches everything beside the config file and everything recursively under subdirectories beside it.
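For example, a minimal jsconfig.json along these lines (ES2020 is just an illustrative choice of ECMAScript standard):

{
  "compilerOptions": {
    // no "DOM" in lib, so lib.dom.d.ts (fetch, Request, Response, document, ...) is not assumed
    "lib": ["ES2020"],
    "target": "ES2020"
  }
}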
Having to create this file could be seen as annoying if you just want to write a single JS file (ie. now you have a whole config file just for one source file!). I don't know if there are alternatives that are more convenient for such a use case. If anyone knows, please edit this answer.
I looked briefly into TypeScript triple-slash directives, which allow specifying things on a per-file basis, but I think you can only add things (ie. I don't think you can use them to remove a lib).
At the time of this writing, there are VS Code settings that can be applied at the user-settings scope that affect settings for implicit projects (JS/TS files which don't have a project config file) (js/ts.implicitProjectConfig.*), but none of them are for setting the compilerOptions.lib field, and my gut says it's probably not going to happen (but don't quote me on that).
You probably also want types for the Node API
Use npm to install a version of @types/node. Make sure the major version number of the version you install matches the major version number of the Node.js version you want the script to be runnable on.
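For example, in package.json (the version number here is illustrative; pick the major that matches your Node.js version):

{
  "devDependencies": {
    "@types/node": "^18.0.0"
  }
}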
Fun facts irrelevant to this question
Continuing on the point about VS Code's user-settings for implicit projects, VS Code puts some defaults in effect (on top of those that TypeScript itself does) if no project is detected. You can poke through the code at github.dev/microsoft/vscode by doing "Find in Files", using extensions/typescript-language-features/**/* as the "files to include" field, and compilerOptions as the find query. compilerOptions.lib seems to not be something that VS Code touches in such a scenario.

Using type-checker ESLint rules with a Nx monorepo

I am trying to use the rule "@typescript-eslint/naming-convention".
My understanding is that this rule requires linting with type information, described here: https://typescript-eslint.io/docs/linting/type-linting/
When trying to use 'plugin:@typescript-eslint/recommended-requiring-type-checking' I then get the following error:
You have used a rule which requires parserServices to be generated. You must therefore provide a value for the "parserOptions.project" property for @typescript-eslint/parser.
However, I cannot see how parserOptions.project should work when using an Nx monorepo.
In the monorepo, there is a variety of different files created for each project. At the top level there is a tsconfig.base.json, then each project has a tsconfig.json, and its own .eslintrc.json. I have tried a variety of combinations (including some great advice here) but I haven't managed to get anything working yet.
If changes to individual project files are required, I could add some custom generator logic, but ideally I'd work with global rules.
Also searching GitHub doesn't bring up any example repositories either.
Q: Can anyone advise me on how to use TypeScript linter rules that require type information with an Nx monorepo?
So it looks like you can use
"parserOptions": {
"project": ["./tsconfig.base.json"]
},
with Nx and that seems to work at the top level, meaning that you shouldn't need to edit individual .eslintrc files for each project.
You also need to apply the "parserOptions" and the "extends": ["plugin:@typescript-eslint/recommended-requiring-type-checking"] within the "overrides" array, for TypeScript files only, as described in this answer: "parserOptions.project" has been set for @typescript-eslint/parser.
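Putting those two pieces together, a top-level .eslintrc.json can look roughly like this (a sketch; the file globs are illustrative and the rest of your existing config stays as it is):

{
  "overrides": [
    {
      "files": ["*.ts", "*.tsx"],
      "extends": ["plugin:@typescript-eslint/recommended-requiring-type-checking"],
      "parserOptions": {
        "project": ["./tsconfig.base.json"]
      }
    }
  ]
}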
This was actually enough to get me working, running nx lint from the command line... my problem was that I also had an issue with WebStorm settings on top of all this, which was obscuring things further. It might be this open issue, https://youtrack.jetbrains.com/issue/WEB-47201, but that is really a completely different question... my original question is answered.

Why don't for..of loops over Iterables work when run within Jest?

I have the following TypeScript code:
const myMap = new Map([["name", 5]]);
for (const foo of myMap.values()) {
console.log(foo);
}
When I run this code in node (v8.12.0) directly, it works and prints out "5" to console.
If I run this exact same code in a Jest test, it never executes the contents of the for loop. It runs the for condition and then just skips past the loop, never actually enumerating the values.
Why is that? Is there something about the JS runtime used by Jest (isn't it node?) that doesn't support for..of over iterables?
Thanks!
After a very long investigation, I have gotten to the bottom of this. Important background information about the solution:
When targeting older ECMAScript versions like ES3 and ES5 with the TypeScript compiler, using for..of loops to iterate over Iterable collections is not supported by default. If you want to use for..of with Iterables, you have to either target something newer than ES5 or use the --downlevelIteration flag (a tsconfig sketch of both options follows these background points).
To use Jest with a TypeScript project, you use ts-jest. At least, I was using it. I think you can also use babel somehow, but I think ts-jest is preferred.
When using ts-jest, by default it tries to use the tsconfig.json file that the project uses -- which, as far as I can tell, means the one that is next to the jest.config.js file you are using (or in the current directory if you aren't specifying a jest.config.js file). If it cannot find a tsconfig.json file in the project directory, it uses the default TypeScript compiler options. The default compiler options cause the TypeScript compiler to target ES3 or ES5 (the ts-jest docs claim TypeScript defaults to ES3, but that ts-jest overrides the default in this case to ES5). --downlevelIteration is not on by default.
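To make the first background point concrete, these are the two compiler options involved (a tsconfig.json sketch; either a newer target on its own or the flag on its own is enough):

{
  "compilerOptions": {
    "target": "es2018",
    "downlevelIteration": true
  }
}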
With all this in mind, I was able to figure out that ts-jest was not able to find my project's tsconfig.json file, so it was running my tests using the default settings: targeting ES5 with no downlevelIteration, which is why none of my for..of loops over Iterables worked. The reason it couldn't find my tsconfig.json file is that my jest.config.js file was in a different directory (higher up in the tree). Even though I was running jest from a directory with a tsconfig.json file, ts-jest was not looking in the "current" directory but in the directory I pointed jest at for my jest.config.js file, which did not contain a tsconfig.json.
My solution to this was to rework my directory structure a bit and leverage the tsConfig option of ts-jest to tell it where to find my tsconfig.json file, which made everything "magically" work since my tsconfig.json file targets es2018, which supports for..of iteration over Iterable.
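A rough sketch of what that looks like in jest.config.js (the preset line and the path are assumptions on my part, and the option's exact spelling has varied across ts-jest versions):

// jest.config.js
module.exports = {
  preset: 'ts-jest',
  globals: {
    'ts-jest': {
      // point ts-jest at the real project tsconfig instead of its defaults
      tsConfig: '<rootDir>/tsconfig.json',
    },
  },
};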
One alternative solution I considered but quickly disregarded was using the aforementioned tsConfig setting to directly set the --downlevelIteration compiler option in the jest config. I chose not to do that because, while it would have fixed this specific problem, it would not have fixed the larger problem, which is that my Jest tests were compiling my TypeScript with different flags than my production code! It just so happens that the only current problem caused by this was my for..of loop misery.
A quick postscript: the way I eventually made headway on this issue is by stumbling across the diagnostics option in ts-jest. Once I set it to true, when I tried to run my tests, an error like this was displayed:
TypeScript diagnostics (customize using [jest-config].globals.ts-jest.diagnostics option): src/foo.ts:163:47 - error TS2569: Type 'Map<Guid, FooInfo>' is not an array type or a string type. Use compiler option '--downlevelIteration' to allow iterating of iterators.
It seems like TS compiler errors should be displayed (and cause test failures) regardless of whether or not ts-jest "diagnostics" are enabled but shrug.
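For completeness, the diagnostics switch sits in the same globals block as the tsConfig sketch above (an excerpt, not a full config):

// jest.config.js (excerpt)
globals: {
  'ts-jest': {
    diagnostics: true, // surfaces TS compile errors such as the TS2569 above
  },
},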

In Flow NPM packages, what's the proper way to suppress issues so user apps don't see errors?

If you use something like $FlowIssue it's not guaranteed to be in everyone's .flowconfig file. If you declare a library interface, that seems to only work for the given project, not in other projects that import your package (even if you provide the .flowconfig and interface files in your NPM package).
Here's the code I'm trying to suppress errors for in apps that use my package:
// $FlowIssue
const isSSRTest = process.env.NODE_ENV === 'test' // $FlowIssue
&& typeof CONFIG !== 'undefined' && CONFIG.isSSR
CONFIG is a global that exists when tests are run by Jest.
I previously had an interface declaration for CONFIG, but that wasn't honored in user applications--perhaps I'm missing a mechanism to make that work? With this solution, at least there is a good chance that users have the $FlowIssue suppression comment. It's still not good enough, though.
What's the idiomatic solution here for packages built with Flow?
Declaring a global variable
This is the way to declare a global variable:
declare var CONFIG: any;
Instead of any you could/should use the actual type.
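For the CONFIG used in the question, the actual type could be along these lines (the field is guessed from the snippet above):

declare var CONFIG: { isSSR: boolean };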
Error Suppression
With flow v0.33 they introduced this change:
suppress_comment now defaults to matching // $FlowFixMe if there are no suppress_comments listed in a .flowconfig
This means that there is a greater chance of your error being suppressed if you use $FlowFixMe.
Differences in .flowconfig between your library and your consumers' code are a real issue, and there is no way to make it so that your code can be dropped into any other project and be sure it will typecheck. On top of that, even if you have identical .flowconfigs, you may be running different versions of Flow. Your code may typecheck in one version, but not in another, so it may be the case that consumers of your library will be pinned to a specific version of Flow if they want to avoid getting errors reported from your library.
Worse, if one library type checks only in one version of Flow, and another type checks only in another version, there may be no version of Flow that a consumer can choose in order to avoid errors.
The only way to solve this generally is to write library definition files and publish them to flow-typed. Unfortunately, this is currently a manual process because there is not yet any tooling that can take a project and generate library definitions for it. In the meantime, simply copying your source files to have the .js.flow extension before publishing will work in some cases, but it is not a general solution.
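A hand-written library definition is just a declare module block; a minimal sketch with hypothetical package and export names, placed in a flow-typed/ libdef file:

// flow-typed/npm/my-package_vx.x.x.js
declare module 'my-package' {
  declare module.exports: {
    isSSRTest: boolean,
  };
}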
See also https://stackoverflow.com/a/43852211/901387

How can I include additional modules in a NodeJS custom binary?

I am building a custom binary of NodeJS from the latest code base for an embedded system. I have a couple of modules that I would like to ship as standard with the binary - or even run a custom script that is compiled into the binary and can be invoked through a command line option.
So two questions:
1) I vaguely remember that node allowed including custom modules at build time, but I went through the latest 5.9.0 configure script and I can't see anything related - or maybe I am missing it.
2) Did someone already do something similar? If yes, what were the best practices you came up with?
I am not looking for something like Electron or other binary bundlers but actually building into the node binary.
Thanks,
Andy
So I guess I figured it out much faster than I thought.
For anyone else: you can add any NPM module to the binary by adding the actual source files to the node.gyp configuration file.
Compile it and run the custom binary. It's all in there now.
> var cmu = require("cmu");
undefined
> cmu
{ version: [Function] }
> cmu.version()
'It worked!'
After studying this for quite a while, I have to say that flyandi's answer is not quite true. You cannot add just any NPM module by adding it to node.gyp.
You can only add pure JavaScript modules this way. Embedding a C++ module (I deliberately don't use the word "native", because that one is quite ambiguous in nodeJS terminology - just look at the sources) takes more work, as described below.
To summarize this:
To embed a JS module in your custom nodejs, just add it in the library_files section of the node.gyp file. Also note that it should be placed within the lib folder, otherwise you'll have trouble requiring the module. That's because the name/path listed in node.gyp / library_files is used to encode the id of the module in the node_javascript.cc intermediate file, which is then used when searching for the built-in modules.
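A sketch of the relevant part of node.gyp (lib/mymodule.js is a hypothetical path, and the exact layout of the file moves around between Node versions):

'variables': {
  'library_files': [
    # ...the stock entries that are already listed here...
    'lib/mymodule.js',  # the embedded JS module, kept under lib/
  ],
},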
Embedding a native module is much more difficult. The best way I have found so far is to build the module as a static library instead of a dynamic one, which for a cmake(-js) based module you can achieve by changing the SHARED parameter to STATIC like this:
add_library(${PROJECT_NAME} STATIC ${SRC})
instead of:
add_library(${PROJECT_NAME} SHARED ${SRC})
And also changing the suffix:
set_target_properties(
  ${PROJECT_NAME}
  PROPERTIES
  PREFIX ""
  SUFFIX ".lib")  # instead of .node
Then you can link it from node.gyp by adding this section:
'link_settings': {
  'libraries' : [
    "path/to/my/library.lib",
    # ...add other static dependencies
  ],
},
(how to do this with a node-gyp based project should be quite easy to google)
This allows you to build the module, but you won't be able to require it, because the require() function in node can only be used to load built-in JS modules, external JS modules, or external dynamic node modules. But now we have a built-in C++ module. Well, a lot of node's integrated modules are C++, but they always have a JS wrapper in /lib, and those wrappers use process.binding() to load the C++ module. That is, process.binding() is sort of a require() function for integrated C++ modules.
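Concretely, such a wrapper can be as small as this (mymodule is the hypothetical module name used throughout this answer):

// lib/mymodule.js - thin JS wrapper around the built-in C++ module
'use strict';
module.exports = process.binding('mymodule');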
That said, we also need to call process.binding() instead of require() to load our integrated module. To be able to do that, we have to make our module "built-in" first.
We can do that by replacing
NODE_MODULE(mymodule, InitAll)
in the module definition with
NODE_BUILTIN_MODULE_CONTEXT_AWARE(mymodule, InitAll)
which will register it as an internal module, and from now on we can process.binding() it.
Note that NODE_BUILTIN_MODULE_CONTEXT_AWARE is not defined in node.h like NODE_MODULE is, but in node_internals.h, so you either have to include that one, or copy the macro definition over to your cpp file (the first option is of course better, because the nodejs API tends to change quite often...).
The last thing we need to do is to list our newly integrated module among the others so that node knows to initialize them (that is, include it within the list of modules used when searching for the modules loaded with process.binding()). In node_internals.h there is this macro:
#define NODE_BUILTIN_STANDARD_MODULES(V) \
V(async_wrap) \
V(buffer) \
V(cares_wrap) \
...
So just add your module to the list the same way as the others: V(mymodule).
I might have forgotten some step, so ask in the comments if you think I have missed something.
If you wonder why anyone would even want to do this... You can come up with several reasons, but here's the one most important to me: those packagers used to pack your project into one executable (like pkg or nexe) work only with node-gyp based modules. If you, like me, need to use a cmake based module, the final executable won't work...
