How to find out which Node.js packages to update after updating one of them?

I am developing an application that consists of several Node.js packages. Of course, they aren't relied on only by the application itself: some of the packages provide such essential basics (think logging) that they are also required by other packages.
Now, if I update a package, I have to update all the other packages' package.json files to make sure that I consistently use the new version everywhere. Unfortunately, this results in a (recursive) loop. And of course, the order in which I update the packages is important because I only want to touch each package once, not multiple times.
How do I figure out, once I have made a change, which packages to update, and in which order?
Currently, I am using two custom-developed small tools called reqd and forany to at least find out which other packages rely on an updated one, but this is only the first step in a number of steps.
How could I solve this?
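For illustration, the update order being asked for is essentially a topological sort of the reverse-dependency graph. Here is a minimal Node.js sketch of the idea (not the OP's reqd/forany tools; it assumes every package sits side by side under a packages/ directory):

// update-order.js -- print the order in which local packages must be
// updated after one of them changes.
const fs = require('fs');
const path = require('path');

// Read every packages/<dir>/package.json into { name: [dependency names] }.
const root = 'packages';
const deps = {};
for (const dir of fs.readdirSync(root)) {
  const pkg = JSON.parse(fs.readFileSync(path.join(root, dir, 'package.json'), 'utf8'));
  deps[pkg.name] = Object.keys(pkg.dependencies || {});
}

// Invert the graph: dependents[a] lists the local packages that depend on a.
const dependents = {};
for (const [name, list] of Object.entries(deps)) {
  for (const dep of list) {
    if (dep in deps) (dependents[dep] = dependents[dep] || []).push(name);
  }
}

// Depth-first topological sort starting from the changed package; the
// `seen` set also guards against the (recursive) loops mentioned above.
function updateOrder(changed) {
  const order = [];
  const seen = new Set();
  (function visit(name) {
    if (seen.has(name)) return;
    seen.add(name);
    for (const d of dependents[name] || []) visit(d);
    order.push(name);
  })(changed);
  return order.reverse().slice(1); // drop the changed package itself
}

console.log(updateOrder(process.argv[2]));

Run it as node update-order.js <package-name>. If there are genuine dependency cycles, the output is only a best effort, since no strict topological order exists in that case.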

Related

Best way to distribute modules that use the same framework

I am creating my first open source project, and I am making some plugins for it. These plugins will be published as npm packages, and they will have identical dependencies.
My question is: what is the best way to deliver them while avoiding code repetition? I know I can use something like Rollup.js to pack all the dependencies a module uses into its final distribution js file, but if the user is using multiple modules, the inlined dependencies will be duplicated and bloat the files.
I know the end user can use a bundler to deduplicate that repeated code, but is there anything more I can do to reduce the size of my distribution js files?
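One approach worth sketching (assuming Rollup, since the question mentions it): mark the shared dependencies as external in each plugin's Rollup config and list them as peerDependencies, so each is installed once by the consumer instead of being inlined into every plugin bundle. The names below are placeholders:

// rollup.config.js -- a sketch; 'my-framework' and 'lodash' stand in
// for whatever dependencies the plugins have in common.
export default {
  input: 'src/index.js',
  // Not bundled: the consumer provides these once.
  external: ['my-framework', 'lodash'],
  output: {
    file: 'dist/plugin.js',
    format: 'es',
  },
};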

Should I keep all sub-packages on a single version in package.json?

There is a 3rd-party library my project uses that has split its functionality into multiple imported packages so that a project can install just what it needs. In package.json, several entries are present for the different sub-packages, like...
"dependencies": {
"#lib/dogs": "^1.0.3",
"#lib/cats": "^1.0.3",
"#lib/iguanas": "^1.0.3"
...lots more of the same...
}
I don't want to spend time thinking about compatibility issues if one of the sub-packages ends up at a different version number than the others, whether through semver range resolution or another developer fixing a problem by bumping the version of just one sub-package. I suspect there is some risk of bugs if the sub-package versions get out of sync, even if the package maintainers intend to respect the meaning of breaking changes in their versioning. It seems simpler to just keep all the sub-packages on the same version by default.
Should I try to enforce (or at least promote) that the sub-packages have the same version?
Promote, but don't enforce.
Your current setup, which uses caret ranges, is the default used when installing with the --save flag for a reason: it's the most flexible and robust range to use for dependencies that correctly follow semver conventions. It means that whenever someone updates your module as a dependency of theirs, npm will automatically bump the sub-dependencies to the latest version that is backwards-compatible with the one explicitly specified after the ^.
Because of this, and because these scoped packages behave identically to normal dependencies (they have no special interdependencies), leaving identical caret ranges for each of them should already be sufficient to avoid compatibility issues by default.
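To make the caret behaviour concrete, here is a quick check using the semver package (the same range implementation npm itself uses; install it with npm install semver):

const semver = require('semver');

console.log(semver.satisfies('1.0.9', '^1.0.3')); // true  -- patch update
console.log(semver.satisfies('1.9.0', '^1.0.3')); // true  -- minor update
console.log(semver.satisfies('2.0.0', '^1.0.3')); // false -- breaking change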
Don't protect developers from themselves
A good methodology to follow when considering how to deal with compatibility issues is to avoid the antipattern of "protecting developers from themselves." In this situation, you propose to put a lock in place that prevents 3rd parties from editing the relative versions of your dependencies, to avoid compatibility issues. This is a very vague goal since you haven't actually run into any problems yet, as you've pointed out.
Sometimes, yes, developers might not know what they're doing, in which case they'll probably avoid tampering with your default dependency versions, but sometimes they do know, and it can be frustrating when a developer knows they can resolve a bug and are unnecessarily prevented from doing so. So hold their hand, don't cuff them.
npm already chose to avoid this antipattern; you should too.
If a 3rd-party developer chooses to use your module as a dependency, they should have the default amount of freedom available to manage their sub-dependencies through npm by using features like package-lock.json, which unlocks a very clean pattern for precisely managing sub-dependency versions without editing the source code of their dependencies.
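For example, a consumer who hits a bad sub-package version can pin it themselves without touching your code (@lib/dogs is the placeholder name from the question above):

$ npm install @lib/dogs@1.0.3 --save-exact

Their package.json then records an exact version for that one package, while package-lock.json records the exact resolved tree:

"dependencies": {
  "@lib/dogs": "1.0.3",
  "@lib/cats": "^1.0.3"
}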
In conclusion, what you have now is a very clean and flexible approach, following common conventions and not going out of the way to constrain 3rd-party developers.

Keep installer size down by reusing files / components

Let's say I have 2 features, both use abc.dll, and both reference it from their respective current directories.
So the output will look like this:
Feature1
    abc.dll
Feature2
    abc.dll
I've created 2 components for this. In reality I have many features and many DLLs that are shared, and my installer size is nearly 1GB.
What I am looking for is a smarter way to do this, using InstallShield 2015 Professional.
What I've looked at so far:
Merge modules: Not sure if this would work; it also means I would need to maintain the merge modules manually should files be upgraded.
DuplicateFile, via direct editor, but this wouldn't work because there is no way to have this bound to a feature, only a component.
A hidden feature which would install the shared files to the target system, then a post script which would copy these files to their respective features and delete the hidden feature's folder.
Is there a best practice method to implement what I need?
The most suitable approach in this case is, indeed, merge modules. I am not sure why you are concerned about maintaining them: you should have an automated build process that creates all merge modules and then builds your main installer with the newly created modules.
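For reference, consuming a merge module is a couple of lines in WiX, shown here only to illustrate the underlying Windows Installer concept; InstallShield exposes the same thing through its UI, and SharedLibs.msm is a hypothetical module name:

<Directory Id="TARGETDIR" Name="SourceDir">
  <Merge Id="SharedLibs" Language="1033" SourceFile="SharedLibs.msm" />
</Directory>
<Feature Id="Feature1" Level="1">
  <MergeRef Id="SharedLibs" />
</Feature>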
However, in my opinion, merge modules are a bit cumbersome to use if you have a lot of custom actions.
An alternative to merge modules - assuming you are using a Windows Installer project - is using small MSI packages which you "chain" to your main installer (you can chain multiple packages with different conditions and supply different properties). Here too, you should have a build process which builds all those small msi packages and then builds the main installer.
If you don't want to have this kind of 'sub-projects', then the option of a hidden feature with a post action is acceptable; I've seen it, and done it, a few times. Note that if you target Windows 7 or later, instead of physically copying the files and deleting them, you can use symbolic links (using the mklink command), which helps reduce the installation's footprint on the target system (and makes patching easier - you replace the original file, and all its links are updated automatically).
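For illustration, the post-install step could create the duplicate as a link rather than a copy (paths are hypothetical; mklink requires an elevated prompt):

mklink "C:\Program Files\MyApp\Feature2\abc.dll" "C:\Program Files\MyApp\Shared\abc.dll"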

Node installs loads of modules

I am trying to install some node modules for my application.
Now after entering this command: npm install laravel-elixir it creates a node_modules folder and installs over a hundred modules! This cannot be right.
How would I solve this problem?
How would I solve this problem?
Write your own code from scratch.
Really, there's very little that can be done. Large dependency trees are very common in Node.js. A lot of modules are built on the backs of other modules. The module in question is an especially large piece of software, trying to do what seems like a lot of different things, and relying on many other modules to do so.
You can try
$ npm install laravel-elixir --no-optional
to see if you can trim some optional dependencies from the tree. Another method is to add optional=false to your .npmrc.
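That is, a one-line addition to the .npmrc in your project root (or home directory):

# .npmrc
optional=false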
In my brief and unscientific testing, this seems to drop about six dependencies from the tree. Not much.
You should also make sure you've updated to npm 3.0 (3.8.6 being the latest), as it does a better job of flattening dependencies.
Sometimes there are needless dependencies in the middle of a tree. In that event there is not much you can do other than reach out to the maintainers and ask whether those dependencies can be removed; even then, all the downstream packages will need to update.
This is generally called dependency hell, and it is an unfortunate symptom of certain modules that rely on too many submodules.
In reality though, if this module does what you need it to do, and there are no ill effects of having many dependencies installed, does it really matter? Other than the install time, when using the module, can you tell that it is pulling in a lot of other modules?

How can one make a private copy of Hackage

I'd like to snapshot the global Hackage database into a frozen, smaller one for my company's deploys. How can one most easily copy out some segment of Hackage onto a private server?
Here's one script that does it in just about the simplest way possible: https://github.com/jamwt/mirror-hackage
You can also use the MirrorClient directly from the hackage2 repo: http://code.haskell.org/hackage-server/
This is not an answer to the question in the title but an answer to my interpretation of what the OP wishes to achieve.
Depending on what level of stability you want in your production cycle, you can approach the problem in several ways.
I have split the dependencies into two parts: things that are in the Haskell Platform (keep every platform version used in production), and a small number of packages outside it. Don't let anyone (including yourself) add more packages to your dependency tree just out of developer laziness. Collect these extra packages from Hackage with some kind of script (locked to specific versions) using cabal fetch, and keep them safe. Create an install script that uses your saved packages, and whenever a new machine (developer) is added to your team, use that script.
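A sketch of such a fetch script (package names and versions are placeholders; cabal fetch stores the source tarballs under ~/.cabal/packages for later offline installs):

#!/bin/sh
# Pin the few non-platform packages to known-good versions.
cabal fetch aeson-0.11.2.0
cabal fetch lens-4.14
cabal fetch text-1.2.2.1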
yackage is great, but it all comes down to how you ship your product. If you have older versions in production, you need to have a yackage setup for every version, and that could be quite annoying after a couple of years.
You can download Hackage with Voker57's hackage-mirror.sh. You'll need 'curl' for it to run. If you're using a Debian based Linux distribution, you can install curl by typing apt-get install curl.
Though it's not a segment of Hackage, I've written a bash script that downloads the whole of Hackage, which can then easily be set up as a mirror using an HTTP server. It also downloads all the required extras, like GHC compilers, ready to be used with Stack.
Currently, a complete Hackage mirror occupies ~10GiB (~100000 packages of all versions) and the Stack-related extras, like GHC compilers, ~21GiB (~200 files). Subsequent runs of the script skip already-downloaded content, so only new material is fetched. This makes it a pretty convenient way to "live offline" and sync up whenever you are online.
