Best practice regarding dependencies - node.js

Which is the best practice while saving package.json dependencies?
For example, I see that lots of dependencies are not fixed, like:
"tslint": "~5.11.0"
I would like to have fixed dependencies, so that they will not change in the future when new developers join the team.
I have little knowledge about package-lock.json and shrinkwrap, but I'm not sure about the "best practice" on this.
In this case it's an Angular app, but it could be anything. Keeping package-lock.json in the repo, for example, caused some issues in the past (I know, it's best practice to push it!)
Any thoughts?

Short answer: caret ranges (^) plus a committed package-lock.json are probably your best approach. This ensures developers always get the same dependencies, and is the least surprising option.
Why package-lock.json?
npm specifically recommends you commit your package-lock.json.
It is highly recommended you commit the generated package lock to source control: this will allow anyone else on your team, your deployments, your CI/continuous integration, and anyone else who runs npm install in your package source to get the exact same dependency tree that you were developing on.
(from the npm documentation)
You mentioned pushing package-lock.json to your repository caused some issues in the past. I'm guessing this was due to the issue where the package lock was being ignored and rewritten every time anyone installed anything. This was not the correct behaviour and was fixed in npm 5.4.2, according to this answer.
What you should not do is leave out the package-lock.json and just specify exact versions in your package.json. If you do this, your top level dependencies will look nice and consistent, but their dependencies won't have locked down versions. This way, you're almost as likely to run into bugs, but they'll be harder to find.
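For example, even if you pin tslint exactly in your own package.json, tslint's own package.json still declares ranges for its dependencies, along these lines (an illustrative, not exact, excerpt):

```json
{
  "name": "tslint",
  "version": "5.11.0",
  "dependencies": {
    "semver": "^5.3.0"
  }
}
```

Without a lockfile, a fresh install may pull a newer semver 5.x than the one you developed against.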
Why not npm-shrinkwrap.json?
You also mention shrinkwrap files. Shrinkwrap files are meant for
applications deployed through the publishing process on the registry
(from the npm documentation)
You probably aren't npm publishing your Angular web app, so there's no reason to use npm-shrinkwrap.json.
Why use caret ranges?
I can't find any documentation saying caret (^) ranges are best practice, but I believe they are: npm makes caret ranges the default option, so it's clear they think this is the right approach.
Using the default is the least surprising approach. If I saw any other kind of version in a package.json, I'd assume it was changed for a good reason, and would be hesitant to update the package without knowing what that reason is, even if it really needed to be updated.
If you ever decide to update all your dependencies at once, caret ranges will serve you well. Your dependencies will normally be locked, but deleting your package-lock.json and rerunning npm install will automatically install the latest versions that are supposedly backwards compatible with the versions you specified (see the npm docs for details on the caret range).
In summary, it's standard to use caret ranges and a package-lock.json. This fulfills your requirement of fixed dependencies, and provides a few other benefits, so it's best to do what's standard, unless you find another reason to change.

Related

What is the use of package-lock.json?

Can anyone please let me know the exact use of the package-lock.json file? Many have mentioned that it is used for viewing the versioned dependency tree, but I'm looking for a simpler, easier explanation.
Thanks in advance.
npm install uses this file to make sure the packages it is going to install are the same as those dictated in the file. It makes the npm install operation consistent across machines, so you are less likely to need to nuke the node_modules folder.
On top of a consistent view of packages, GitHub also uses package-lock.json to scan your repository for known security vulnerabilities.
You can use lock-walker to visually walk the dependency tree in package-lock.json, which is especially useful when checking out a security vulnerability.
I think the npm documentation is quite explanatory.
Its main purpose is to provide
single representation of a dependency tree such that teammates, deployments, and
continuous integration are guaranteed to install exactly the same dependencies.
so that, for example, on a different system and/or for different people, the same dependencies (and the same versions) will be used.
For a better explanation see this
Hope this helps.

When to add a dependency? Are there cases where I should rather copy the functionality?

I lately helped out on a project, where I added a really small dependency - in fact, it only contained a regular expression (https://www.npmjs.com/package/is-unc-path).
The feedback I got from the developer of the project was that he tries to minimize third-party dependencies if they can be implemented easily - whereby he - if I understand it correctly - asks me to just copy the code instead of adding another dependency.
To me, adding a new dependency looks just like putting some lines of code into an extra file in the repo. In addition, the developers will be informed by an update if the code needs a change.
Is it just a religious notion that drives a developer to do this? Are there any costs (performance- or space-wise, etc.) to adding a dependency?
I once had some disputes with my managers concerning third-party libraries; the problem was even greater because one of them came to believe that you should version the node_modules folder.
The source of any conflict is usually ignorance.
His arguments were:
you should deliver a working product to the client, without requiring them to do any extra jobs like npm install
if GitHub or npm is down at the moment you run npm install on the server, what will you do?
if the library that you install has a bug, who will be responsible?
My arguments were:
versioning node_modules is not going to work due to how package dependencies work: each library will download its own node_modules dependencies, and your git repository will rapidly grow to hundreds of MB. Deploys will become slower and slower; downloading half a GB of code each time takes time. npm uses a module caching mechanism: if there are no changes, it will not download code uselessly.
the problem with left-pad was painful, but after that npm implemented a lockfile system, and now each package can be pinned to a specific version with an integrity hash.
And GitHub and npm are not single-instance services; they run in the cloud.
When installing a dependency you always have some criteria in mind, and there are community best practices; they usually come down to: 1. Does the repo have unit tests? 2. The download count. 3. When was the latest update?
The Node.js ecosystem is built on modularity; node is not popular because of luck, but because of how it was designed to create modules and reuse them. Sometimes working in the Node.js environment feels like putting Lego pieces together and building your toy. This is the main cause of super-fast development in Node.js: people just reuse stuff.
In the end, he stuck to his own ideas, and I left the project :D

Does every git branch of an NPM project have different node_modules dependencies?

I assume that when developing an NPM project, every git branch (or whatever version control system you use) probably points to a different set of node_modules on the filesystem. Is that true? How does that work? Does it pose any problems for disk space, etc.?
Or perhaps, since node_modules is most commonly .gitignore'd, then the node_modules files are shared between Git branches? Again, how would/does that work?
*Note that Node.js / NPM is fundamentally different from other platforms/languages, since dependencies are typically stored locally to a project rather than in some central location on a machine.
By convention, one should not add any files, libraries, or binaries which can be generated or pulled in from an external source. This includes things like node_modules; since it is made readily available* once you do npm install, there's no reason or incentive** to put it into your source control. At worst, it will bloat your repository, filling your diffs with things you simply don't control and don't necessarily want to review.
I would not expect different Git branches of an NPM project to contain different node_modules folders. I'd only expect the one node_modules folder, and if a branch gave me fits about dependencies, I'd look to reinstall the dependencies (and note it down to be sure that something else hadn't gone awry).
As an addendum, any files or folders in .gitignore are simply not indexed or tracked by Git. If the contents of those files or folders change, Git is none the wiser. This also means, when switching between branches, the contents of the files or folders in .gitignore remain the same.
*: Provided that the library you're using isn't suddenly yanked. Or the repository is not impacted by a colossal DDoS.
**: There may be some incentive to do this given that the reliability of certain NPM packages hasn't been 100% this year, but that's a team and architecture-driven decision, and I doubt that placing it into source control is the most ideal and convenient way to deal with it.
There are two schools of thought, and both have merit.
1) Never check in node_modules and rebuild on deploy/install
This approach relies heavily on NPM and the connectivity of your deploy environment: node_modules is downloaded and installed (and/or compiled) each time the deploy runs.
Positives:
Your repository is much smaller.
NPM modules are installed in the environment they will run on.
Concerns:
Tied to 3rd party for sources - Go read about that whole left-pad thing. If one dependency cannot be downloaded, your entire build system is hung out to dry. "Cranky and paranoid old timers" will cite this as the reason to check everything in (or run your own private NPM somewhere).
Branch management - Like you mentioned in the question, some branches might not have the same dependencies. Dev1 adds a new feature and uses a new package. Now Dev2 runs the dev branch, everything is broken, and they need to know to npm install the new package. More subtle is the case where an npm package's version changed (now you need npm update, as npm install will say nothing has changed), or where their node_modules were upgraded to work on "new feature 10" but need to be cleared out entirely to "downgrade" and fix "prior bug 43". If you are in active development with a team of more than 2-3, watch out for this one.
Build Time - If it is a concern, it takes a little longer to download and install everything. Or a lot longer.
2) Always check in everything you can
This approach includes node_modules as part of the repo.
Positives:
Not dependent on 3rd party sources. You have what you need to run. Your code can live on its own forever, and it does not matter if npm is down or a repo is deleted.
Branches are independent, so new features from Dev1 are automatically included when Dev2 switches to that branch.
Deploy time is shorter because not much needs to be installed.
Concerns:
Repository is much larger. Clones of code take longer as there are many more files.
Pull Requests need extra care. If a package is updated (or installed) along with core code, the PR is a mess and sometimes unintelligible. "500 files changed", but really you updated a package and changed two lines of core code. It can help to break it down into two PRs: one that is a mess (the package update) and one that is actually reviewable (the core code change). Again, be prepared for this one. The packages will not change too often, but when they do, your code review takes a little longer (or a little more care).
OS-dependent packages can break. Basically, anything that is installed/compiled with gyp can be OS-dependent (among others). Most packages are "pure JS" and, being just scripts, run everywhere. Imagine all your devs run and test on OS X while you deploy to Linux: you cannot check in packages that were compiled on a Mac, because they will not run on Linux. An odd workaround for this is to define most packages as dev dependencies (--save-dev) and the ones that need compiling as normal ("production", --save); then you run npm install --production so the dev dependencies are not installed (they are already present in the checked-in node_modules), but the others are.
Conclusions
It depends. (Don't you hate hearing that all the time? : )
Depending on your team and your concerns, you might go either approach. Both have their merits, and you will decide which is more beneficial to you. Both have drawbacks as well, so be aware of those before you get bit!
Personally I ignore node_modules, but I have a different package.json in each branch, and when I switch I reinstall the dependencies.
Two branches having different sets of node modules happens in the scenario where one branch is in the development phase and the other is your production branch. In such cases the development branch will have more node modules than production. If I am not wrong, any other scenario might get you into trouble.
Pushing node_modules to a remote version control repository is bad practice, so just rely on npm install whenever you clone a branch or pull the code, to download any new node module added to package.json.
Since you don't have node_modules in your actual repository, you need to install the node modules again, and each branch might have its own requirements: you might update your server.js with a new dependency, and you also need to make sure these newly added node dependencies are on your production server as well.

Fallback options for npm failure caused by unpublish

We have a node.js project, and we want to start managing its dependencies using npm's package.json with specified versions for each dependency.
However, we are afraid that one of the packages our project depends on might get unpublished. Should I worry about unpublishing or is it a rare occurrence? What is the most effective way to handle this kind of problems?
It is a very rare occurrence; it has never happened to me.
Unpublish is mostly used to remove a published version in which a major bug has been reported. That way, an automatic semantic-versioning upgrade will not fetch that version until a new one is published.

How to deal with local package dependencies in nodejs with npm

How should we deal with local packages that are a dependency in other local packages?
For simplicities sake, say we have the follow packages
api - express application
people - a package to deal with people
data-access - a package that deals with data access
And then the dependencies are
api depends on people
people depends on data-access
Currently we have these dependencies setup as file dependencies.
I.e. api package.json would have
"dependencies": {
"people": "file:../people"
}
Trouble with this is that we're finding it a PITA when we make updates to one package and want those changes in the other packages that depend on it.
The options we have thought of are:
npm install - but this won't overwrite previously installed packages if changes are made, so we have to delete the old one from the node_modules directory and re-run npm install... which can be niggly if the package dependency is deep.
npm link - we're not sold on the idea because it doesn't survive version control... Just thinking about it now, maybe we have some kind of local build script that would run the npm link commands for us... this way it could survive version control. Would that be a grunt job?
grunt - we haven't dived too deep into this one yet, but it feels like a good direction. A little bit of googling and we came across this: https://github.com/ahutchings/grunt-install-dependencies
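For instance, we imagined wiring the npm link idea into a script in package.json so it survives version control (paths here are hypothetical, and would need to match our folder layout):

```json
{
  "scripts": {
    "link-local": "npm link ../data-access && npm link ../people"
  }
}
```

Each developer would then run npm run link-local once after cloning.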
So, what option would work best for our situation?
Are there other options that we haven't thought of yet?
Ps. we're a .NET shop doing a PoC in node, so assume we know nothing!
Pps. if you strongly believe we're setting up our project incorrectly and we shouldn't have smaller individual packages, let me know in the comments with a link to some reading on the subject.
So, I agree that going with 'many small packages' is usually a good idea. Check out 12factor.net if you haven't already.
That said, in specific answer to your question I'd say your best bet is to consider mainly how you want to maintain them.
If the 'subcomponents' are all just parts of your app (as, for example, data-access implies), then I'd keep them in the same folder structure, not map them in package.json at all, and just require them where you need them. In this case, everything versions together and is part of the same git repository.
If you really want to or need to keep them all in separate git repositories, then you can do npm link, but to be honest I've found it more useful to just use the URL syntax in package.json:
"dependencies": {
  "people": "git://path.to.git:repo#version.number"
}
Then, when you want to explicitly update one of your dependencies, you just have to bump the version number in your package.json and run npm install again.

Resources