How can I get a warning-free Node.js build?

When bootstrapping a new Expo project with expo init ..., I see the following warning (among about a dozen others) right off the bat:
warning expo > uuid#3.4.0: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
OK, great. I understand the concern. So I go through uuid v7's list of breaking changes and verify that, while there are some possible breakages, none of the calling code seems to run afoul of them. Then, using Yarn resolutions, I add this to my package.json:
"resolutions": {
"uuid": "7.0.3",
...
}
Next, I delete node_modules and yarn.lock, run yarn install again, and now I get this warning:
warning Resolution field "uuid#7.0.3" is incompatible with requested version "uuid#^3.4.0"
So, essentially, I've traded one warning for another. What I really want here is an error- and warning-free build. I'm willing to accept responsibility for the breakage I might cause by pinning "incompatible" versions. Alternatively, if I could turn these warnings off one by one somehow, that would be suboptimal, but I could probably live with it.
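(For what it's worth, Yarn v1's selective dependency resolutions also accept glob-scoped keys, so an override can at least be narrowed to a single dependency path instead of applying tree-wide; the path below is illustrative, and as far as I can tell the incompatibility warning fires either way:)
"resolutions": {
  "expo/**/uuid": "7.0.3"
}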
I come from a previous life of developing safety-critical systems in C & C++, where we operated under a doctrine of 'warnings are errors waiting to happen; no warnings allowed ever.' I've noticed from looking at many other Node.js projects that many folks working in this ecosystem seem to just say, "Warnings? YOLO!", and I can see why, TBH. When the problematic dependency is a transitive dependency seven layers deep (all seven of which you didn't write, don't "own", and are unlikely to be able to edit/fix, and six of which you didn't even explicitly ask for), I can see how it might be easy to say, "Not my problem!" and push onward.
But is it really the case that there's just no hope for a warning-free build? I thought resolutions would be the solution to this, and it was helpful for one dependency that was a "compatible" version, but I'm driving myself nuts trying to figure out how to simply get a clean build on an effectively empty (i.e. no code of my own) project. There have to be companies/teams out there with similar desires, so I'm assuming there's some solution I'm not aware of. Does anyone have any hot tips?
I've tried:
Yarn resolutions (as described above)
npm-force-resolutions (seems like an abomination -- it edits your package-lock.json and requires 'double installing')
force-resolutions (which looks like just a newer fork of npm-force-resolutions, with the same issues)
I've read about npm v8.3's overrides (and it looks promising, TBH), but Expo's cloud build service uses npm v8.1, so that's off the table until some nebulous moment in the future.
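For whenever that future arrives, the overrides field is, as far as I can tell, the npm analogue of resolutions; a minimal sketch of what I'd expect to put in package.json (it can also be nested under a parent package name to narrow its scope):
"overrides": {
  "uuid": "7.0.3"
}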
I just can't believe this is an unsolvable problem, or that I'm somehow the first person to try to solve it. Thanks!

Related

What does cabal mean when it says "The following packages are likely to be broken by the reinstalls"

I've seen this message pop up a couple times when running cabal v1-install with a suggestion to use --force-reinstalls to install anyway. As I don't know that much about cabal, I'm not sure why a package would break due to a reinstall. Could someone please fill me in on the backstory behind this message?
Note for future readers: this discussion is about historical matters. For practical purposes, you can safely ignore all of that if you are using Cabal 3.
The problem had to do with transitive dependencies. For instance, suppose we had the following three packages installed at specific versions:
A-1.0;
B-1.0, which depends on A; and
C-1.0, which depends on B, but not explicitly on A.
Then, we would install A-1.1, which seemingly would work fine:
A-1.1 would be installed, but the older A-1.0 version would be kept around, solely for the sake of other packages built using it;
B-1.0 would keep using A-1.0; and
C-1.0 would keep using B-1.0.
However, there would be trouble if we, for whatever reason, attempted to reinstall B-1.0 (as opposed to, say, update to B-1.1):
A-1.1 and A-1.0 would still be available for other packages needing them;
B-1.0, however, would be rebuilt against A-1.1, there being no way of keeping around a second installation of the same version of B; and
C-1.0, which was built against the replaced B-1.0 (which depended on A-1.0), would now be broken.
v1-install provided a safeguard against this kind of dangerous reinstall. Using --force-reinstalls would disable that safeguard.
For a detailed explanation of the surrounding issues, see Albert Y. C. Lai's Storage and Identification of Cabalized Packages (in particular, the example I used here is essentially a summary of its Corollary: The Pigeon Drop Con section).
In its later versions, Cabal 1 was able to detect that the reinstall in the scenario above changed B even though its version number remained the same (which is what made the safeguard possible), but it couldn't keep the two variants of B-1.0 around simultaneously. Cabal 3, on the other hand, is able to do that, which eliminates the problem.

Use exact version numbers in package.json or not?

Common practice for version numbers of npm dependencies in package.json has been to enter exact version numbers (like 1.2.4) instead of inexact version numbers (like ^1.2.4, which also allows installing newer compatible releases like 1.2.5) to make sure a future installation will not break due to changes in dependencies (see for example this article).
Using exact version numbers has a drawback: you can't automatically pick up bug-fix releases of your dependencies. This is an issue when nested dependencies receive security or bug fixes. For example, at this moment the package karma-browserstack-launcher uses browserstack, which is using an outdated version of https-proxy-agent containing a security vulnerability. This is very visible right now thanks to npm audit, which looks for security issues in dependencies.
For some time now we have had package-lock.json, which locks down the version numbers of all dependencies. This may change how we deal with exact versus inexact version numbers in package.json.
My question is: given package.json and package-lock.json, what is the best strategy nowadays to deal with version numbers of dependencies? Use exact versions or not? How can I deal with security issues in nested dependencies if they don't get upgraded?
My feeling is that
packages that are libraries and meant to be used by others should have inexact version numbers and should specify the minimum they require in order to work; and
top-level projects that aren't going to be included elsewhere should specify the full version numbers of their requirements, so they can have the most control over when things are updated.
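A minimal sketch of that split (package names purely illustrative): a library leaves room for compatible updates, while an application pins exactly and lets package-lock.json freeze the rest of the tree.
In the library's package.json:
"dependencies": {
  "left-pad": "^1.3.0"
}
In the application's package.json:
"dependencies": {
  "left-pad": "1.3.0"
}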

Why is removing node_modules still required to fix some npm problems?

npm 1.0 was released in May 2011, yet with its latest release as of today, I still often run into problematic situations that can only be solved by removing node_modules. Of course, sometimes it is enough to reinstall some package, or rebuild another, but as of 2018 the consensus seems to be that the ultimate solution is to remove node_modules and run npm install again. I wonder: why is that the case? I'd imagine that most difficult bugs would have been solved since the 1.0 release; does it have something to do with the design? I found that Yarn is free from this problem.
Much like "turning it off and on again", it's a very simple action that in almost all cases will sort out what's wrong with no damage.
Then once it's sorted, there's not much incentive for the user to pursue it, and all evidence of what the problem was will be gone.
The bug remains of low importance (because there's an easy workaround) and high difficulty (no evidence, may be hard to reproduce), so doesn't get fixed.

Should I keep all sub-packages on a single version in package.json?

There is a 3rd-party library my project uses that has split its functionality into multiple imported packages so that a project can install just what it needs. In package.json, several entries are present for the different sub-packages, like...
"dependencies": {
"#lib/dogs": "^1.0.3",
"#lib/cats": "^1.0.3",
"#lib/iguanas": "^1.0.3"
...lots more of the same...
}
I don't want to spend time thinking about compatibility issues if one of the sub-packages ends up installed at a different version than the others, whether through semver range resolution or another developer fixing a problem by bumping the version of just one sub-package. I suspect there is some risk of bugs if the sub-package versions get out of sync, even if the package maintainers intend to respect the meaning of breaking changes in their versioning. It seems simpler to just keep all the sub-packages on the same version by default.
Should I try to enforce (or at least promote) that the sub-packages have the same version?
Promote, but don't enforce.
Your current set-up, which uses caret ranges, is the default used when installing with the --save flag for a reason: it's the most flexible and robust range to use for dependencies that correctly follow the semver conventions. This means that whenever someone updates your module as a dependency of theirs, it will automatically bump their sub-dependencies to the latest version that is backwards-compatible with the one explicitly specified after the ^.
Because of this, and because scoped packages behave identically to normal dependencies and have no interdependencies of their own, leaving identical caret ranges for each of them should already be sufficient to avoid compatibility issues by default.
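To make that concrete, here's what the common semver ranges admit (version numbers purely illustrative):
^1.0.3   means >=1.0.3 <2.0.0   (accepts 1.0.4 and 1.1.0, rejects 2.0.0)
~1.0.3   means >=1.0.3 <1.1.0   (accepts 1.0.4, rejects 1.1.0)
1.0.3    means exactly 1.0.3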
Don't protect developers from themselves
A good methodology to follow when considering how to deal with compatibility issues is to avoid the antipattern of "protecting developers from themselves." In this situation, you propose to put a lock in place that prevents 3rd parties from editing the relative versions of your dependencies, to avoid compatibility issues. This is a very vague goal since you haven't actually run into any problems yet, as you've pointed out.
Sometimes, yes, developers might not know what they're doing, in which case they'll probably avoid tampering with your default dependency versions, but sometimes they do know, and it can be frustrating when a developer knows they can resolve a bug and are unnecessarily prevented from doing so. So hold their hand, don't cuff them.
npm already chose to avoid this antipattern; you should too.
If a 3rd-party developer chooses to use your module as a dependency, they should have the default amount of freedom available to manage their sub-dependencies through npm by using features like package-lock.json, which unlocks a very clean pattern for precisely managing sub-dependency versions without editing the source code of their dependencies.
In conclusion, what you have now is a very clean and flexible approach, following common conventions and not going out of the way to constrain 3rd-party developers.

What is the after element used for in JHBuild?

Can someone please tell me what the after element in JHBuild is used for?
I've searched far and wide for a description and I'm at a loss as to why I cannot find anything about it.
Going on from this, I would like to know the difference between the dependencies, suggests, and after elements, i.e. how does JHBuild treat them differently?
Thanks.
dependencies are hard dependencies. Packages that are required to build a module.
suggests are soft dependencies. Packages might use them if they are installed (detected at build time), but if they are not present, they do not present a problem. These dependencies will be built, but they can be skipped without problems by passing the argument --ignore-suggests. For instance, evolution can be built with or without nss support.
after entries are not strict dependencies, but they are needed at runtime to get some features. For instance, metacity is needed by mutter to have key binding settings in mutter.
