Do you know of any tool that monitors breaking changes in GitHub, npm, or Bower projects?
I'd like to go through the commit history, find every change marked as a breaking change, and list them.
I am using npm-check-updates, which tells me what is new, but it doesn't tell me what has changed in between.
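For reference, this is roughly how I use it (assuming the ncu binary that npm-check-updates provides):

npx npm-check-updates      # list dependencies that have newer versions
npx npm-check-updates -u   # rewrite package.json to the newer ranges
npm install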
Recently I found greenkeeper.io, but as far as I know it doesn't list what is new; it simply performs the upgrade and checks whether your tests still pass. If the tests fail, you have to fix things yourself.
In an ideal open-source world the author marks every breaking change and writes code without bugs. In the real world that is not true: you get open-source packages for free, but with possible bugs and undocumented API changes.
The only answer that works is testing your application against each dependency update. You should have tests and a shrinkwrap file.
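A minimal sketch of that workflow (the package name is only a placeholder):

npm shrinkwrap                    # freeze the exact dependency tree in npm-shrinkwrap.json
npm outdated                      # list dependencies with newer versions
npm install some-package@latest   # update one dependency at a time (hypothetical package)
npm test                          # run the test suite before committing the update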
greenkeeper.io is an amazing tool. It opens a pull request with the dependency's changelog. Check an example.
Related
At work we're using a very old template (generated around April 2021, so Node v14.19) which has an out-of-sync package-lock.json. This means that if you do:
rm package-lock.json && npm install
The install will fail due to conflicting dependencies.
For a couple of weeks my teammates and I have tried to fix this, but we haven't succeeded yet: when you fix the dependencies you break ESLint, when you fix ESLint you break deployment, when you fix deployment the logger stops working, and so on. We have thousands of dependencies that are turning out to be hell to maintain.
Our CTO's point of view is that we simply shouldn't delete the package-lock.json, but this means we can't update Node and we are stuck with what I think is a huge technical risk.
Do you think it's fixable? Have you ever been in a similar situation?
Is not deleting the package-lock.json enough to avoid the problem?
If not, how could I produce an example where I can break the flow? Maybe by installing a modern package that is incompatible with the old resolution?
This is one of the reasons why both modular code and teams are important. Cramming the code into one big file or just a few files will cause this kind of mess.
Do you think it's fixable? Have you ever been in a similar situation?
Yes
Is not deleting the package-lock.json enough to avoid the problem?
No. As said in the comments, it is a big security risk not to keep your dependencies up to date; that's why you have package managers like npm. New vulnerabilities are openly discussed in different forums, so not only are unscrupulous actors aware of them, well-intentioned programmers are aware of them too and will judge your software to be of low quality. You also put your clients at real risk of running into trouble with your software, and your company at risk of facing litigation.
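One concrete way to check is npm's built-in audit command (available in npm 6 and later):

npm audit       # list known vulnerabilities in the current dependency tree
npm audit fix   # apply compatible fixes where available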
If not, how could I produce an example where I can break the flow? maybe by installing a modern package that is incompatible with the old resolution?
My Suggested Solution:
Leave the code as is (Good thing it is still working even with new input data).
Modularize the code. Put chunks of related functionalities into separate files and import them into the main code (make sure everything is still working).
Assign teams to the separate files (modules) to build new versions of the code (modular testing also has to be implemented here so you can test each module independently of the main file).
For each test unit, make sure it has its own up-to-date package.json file.
Integrate everything into a new project software.
One advantage of doing things this way is that the main file's code rarely changes. Also, each module can be updated independently of the main program and the other modules. The only downside to this approach is that you have to manage the package.json files deliberately so that dependencies do not conflict and are not recursive during integration, thereby (sometimes) requiring a separate team to manage integration.
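A minimal sketch of what one module's package.json could look like (the module name, packages, and versions are only placeholders):

{
  "name": "logger-module",
  "version": "1.0.0",
  "scripts": {
    "test": "mocha"
  },
  "dependencies": {
    "winston": "^3.0.0"
  },
  "devDependencies": {
    "mocha": "^10.0.0"
  }
}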
I tried using preinstall npm scripts, but they only run when I check out the project into a new workspace and run "npm i" on its own.
I need a solution that runs a script before the new dependency is written into package.json. It shouldn't depend on the type of dependency, dev or prod: all of them need to be checked.
For example, when a new developer joins the team and wants to add a new dependency that has a known vulnerability, this script should stop the action before package.json is changed and show a warning message to the developer.
There isn't a way to do that with npm scripts. So, unless you feel like implementing one you're going to have to adjust your process. Start by identifying all the problems you're trying to address with an on-dependency-install hook.
You give the example of preventing the installation of a dependency or dependency version. That's not a problem: it's a solution you've identified for the problem. Figure out what the actual problem is, and then reevaluate your solution to see if it's really the most appropriate measure to take.
Possibly (probably) you are afraid of vulnerable code making it up to production. That's a problem definition you can work with. What possible solutions exist? You've already identified the blacklist. But not only is that not supported by your tooling, even if it were the onus is then on you to keep the blacklist up to date. Given just how quickly the Node world moves, that's enough work to keep several people employed fulltime. And that's not even getting into deploying it to your developers.
The good news is that that's not the only solution: you could establish procedural safeguards against integrating vulnerable code. If you're using a distributed VCS like Git, pull requests are right there: disable pushing commits to the master or development branch, have developers work in feature branches and submit pull requests, then review those pull requests and screen any new dependencies for vulnerabilities when they show up. If you're using something like SVN, you can use feature branches with code reviews to similar effect. Your developers get extra eyes on their code looking for vulnerabilities, optimizations, edge cases, and so forth; you don't waste time screening dependencies that nobody ever tries to integrate. And nobody has to worry about getting the latest copy of the blacklist. For this particular scenario, everybody wins with a process solution over a technical solution.
If you have other reasons for wanting to fire scripts when dependencies are installed, try working back to the root of the problem the same way. The way Node dependency management and module interactions work, you'll probably discover it's preferable to develop better process habits.
If you are using Git, you can use pre-commit/pre-push hooks; the result is pretty much the same: no vulnerabilities in the code base.
For example, with husky and nsp you could do something like this:
{
  "scripts": {
    "prepush": "nsp check"
  }
}
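Both tools would go in as dev dependencies, something like:

npm install --save-dev husky nsp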
Riffing off Gabriel's suggestions, since you are concerned about devs wasting time when the lib they add fails an nsp check... You can use an editor extension to run the nsp check as they code. Then have Husky do a pre-commit nsp check as well.
I would also recommend Greenkeeper.io to help catch vulnerable dependencies before they become a problem.
If the main concern is that these vulnerable packages are being run within your network (since there's no way to prevent those devs from using those packages in general), you could mirror a subset of the npm registry that you consider safe, or manually add known safe dependencies to that mirror, and block access to the main registry https://registry.npmjs.org/ at the network level. This would mean your developers are stuck waiting for the mirror to be updated, but would require somebody to at least stop and think before they're able to install a problematic module.
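A sketch of pointing npm at such a mirror (the mirror URL is only a placeholder):

npm config set registry https://npm-mirror.internal.example.com/
npm config get registry    # verify which registry npm will use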
I recently helped out on a project where I added a really small dependency - in fact, it only contained a regular expression (https://www.npmjs.com/package/is-unc-path).
The feedback I got from the project's developer was that he tries to minimize third-party dependencies if they can be implemented easily - in other words, if I understand it correctly, he asked me to just copy the code instead of adding another dependency.
To me, adding a new dependency looks just like putting some lines of code into an extra file in the repo. In addition, the developers will be informed by an update if the code needs a change.
Is it just a religious thought that drives a developer to do this? Are there maybe any costs (performance- or space-wise, etc) when adding a dependency?
I once had a dispute with my manager concerning third-party libraries; the problem was even bigger there: he had come to believe that you should version the node_modules folder.
The source of any conflict is usually ignorance.
His arguments were:
you should deliver to the client a working product without requiring him to do any extra steps like npm install
if GitHub or npm is down at the moment you run npm install on the server, what will you do?
if the library that you install has a bug, who will be responsible?
My arguments were:
versioning node_modules is not going to work because of how package dependencies work: each library downloads its own node_modules dependencies, and your Git repository will rapidly grow to hundreds of MB. Deploys will become slower and slower; downloading half a GB of code every time takes time. npm uses a module caching mechanism, so if there are no changes it will not download code needlessly.
the problem with left-pad was painful, but after that npm implemented a lockfile mechanism, and now every package is pinned to an exact version and integrity hash (a minimal lockfile sketch follows this list).
and GitHub and npm are not single-instance services; they run in the cloud.
when installing a dependency you always have some criteria in mind, and there are community best practices; they usually come down to: 1. Does the repo have unit tests? 2. The download count. 3. When was the latest update?
the Node.js ecosystem is built on modularity; Node is not popular because of luck, but because of how it was designed for creating and reusing modules. Sometimes working in the Node.js environment feels like putting Lego pieces together to build your toy. This is the main reason development in Node.js is so fast: people just reuse stuff.
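To illustrate the locking argument, here is a minimal sketch of a package-lock.json entry (the package is only an example and the integrity hash is elided):

{
  "dependencies": {
    "left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-..."
    }
  }
}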
In the end he stuck with his own ideas, and I left the project :D
We have a node.js project, and we want to start managing its dependencies using npm's package.json with specified versions for each dependency.
However, we are afraid that one of the packages our project depends on might get unpublished. Should I worry about unpublishing, or is it a rare occurrence? What is the most effective way to handle this kind of problem?
It is a very rare occurrence. It has never happened to me.
Unpublishing is mostly used to remove a published version in which a major bug has been reported, so that automatic semantic-versioning upgrades will not fetch that version until a new one is published.
In doing research on whether Node's node_modules should be checked into your version control repository, the most recent consensus seems to be that you should include it for applications you deploy.
Sources:
Check in node_modules vs. shrinkwrap
Should I check in node_modules to git when creating a node.js app on Heroku?
https://www.npmjs.org/doc/misc/npm-faq.html#Should-I-check-my-node_modules-folder-into-git
Reading these arguments led me to question whether Composer's vendor/ directory should also be checked into version control. Composer's documentation suggests that you don't:
Should I commit the dependencies in my vendor directory?
The general recommendation is no. The vendor directory [...] should be added to .gitignore/svn:ignore/etc.
The best practice is to then have all the developers use Composer to install the dependencies. Similarly, the build server, CI, deployment tools etc should be adapted to run Composer as part of their project bootstrapping.
While it can be tempting to commit it in some environment, it leads to a few problems:
Large VCS repository size and diffs when you update code.
Duplication of the history of all your dependencies in your own VCS.
[...]
Contrasting that argument is this one (source):
Doesn’t checking in node_modules create a lot of noise in the source tree that isn’t related to my app?
No, you’re wrong, this code is used by your app, it’s part of your app, pretending it’s not will get you in to trouble. You depend on other people’s code and they are just as likely to write new bugs as you are, probably more so. Checking all of that code in to source control gives you a way to audit every line that ever changed in your application. It allows you to use $ git bisect locally and be ensured that it’s the same as in production and that every machine in production is identical. No more tracking down unknown changes in dependencies, all the changes, in every line, are viewable in source control.
In summary, the question is this: Why would one gitignore (i.e. not version control) node_modules but not do the same for Composer's vendor/ directory?
The reasons to commit external dependencies are:
it's easier to deploy with git pull
the code used is directly included in the commit anyone checks out
Arguments against this are:
Git is not a deployment tool
all dependency managers have a way to make sure that exactly the code that was used can be fetched again
I don't know anything about npm, but for Composer that last point is implemented by committing composer.lock.
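In practice that means ignoring the dependency directory while committing the lock file; a minimal .gitignore sketch for a Composer project (with the npm equivalent alongside):

# .gitignore
vendor/
node_modules/
# composer.lock (and npm's shrinkwrap/lock file) stay committed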
I don't think the "audit code" argument is a valid one in every case. If you do write software that get's audited by itself, and subsequently needs all libraries to be audited, then probably all code changes need to be conserved. This isn't true for the general case.
git bisect still works just as well with a committed composer.lock. It does require installing the dependencies at every bisect step, but this is just one simple step, probably already part of the automated test suite run.
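A sketch of automating that with git bisect run (the good tag and test command are only examples, assuming a PHPUnit suite installed by Composer):

git bisect start
git bisect bad HEAD
git bisect good v1.0.0
git bisect run sh -c "composer install --quiet && vendor/bin/phpunit"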
The last thing to worry about is when used packages go offline. This really is more of a problem with Composer, because there is no central hosting of the downloadable releases (npm probably does this). If this is a problem, either commit the code (and try to figure out how to update that missing package in the future), or setup an instance of Satis to create a local copy of the code you use.
Putting all your modules in your VCS makes it really heavy to download or upload. For example, I work on two Node.js projects and the total size of the node_modules directories is between 250 MB and 500 MB, whereas the whole code with assets is generally less than 40 MB. Every Node.js developer likes Node's lightness, so the code must stay easy to download and share.
For the second point, an alternative that avoids regressions is to be more restrictive with your package.json dependency versions. You will find more information here.
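A sketch of what "more restrictive" can look like, using example packages and versions: an exact version allows no automatic updates, a tilde range allows only patch updates, and a caret range also allows minor updates.

{
  "dependencies": {
    "express": "4.17.1",
    "lodash": "~4.17.21"
  }
}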
Finally, the best argument is to take a look at the famous modules everybody knows:
express ignores node_modules
mocha ignores node_modules
q ignores node_modules
...
The more I research this the more I'm starting to think that there is no one correct answer to this, just different opinions as well as pros and cons of each method.
This blog, looking from the context of Bower, does a good job of weighing the pros and cons of each: http://addyosmani.com/blog/checking-in-front-end-dependencies/.
In short: At least for right now, weigh the pros and cons and decide what best fits your situation.