Is it necessary to delete node_modules before executing an 'npm install' for a more reliable installation or does npm overwrite everything?
Npm definitely does not overwrite everything every time. It runs through a fairly complicated process that I won’t dive deep into here, but in general you should not need to delete node_modules every time. npm exists to handle this situation and generally will only download newly added or updated packages.
I will occasionally find myself needing to completely remove the node_modules directory after adding, removing, or updating a large number of packages. Sometimes stale copies of packages can linger. But it's not a common occurrence, and I usually only recommend it when you notice a package appears out of date.
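When a full reset does seem warranted, the usual sequence is a sketch like the following (run from the project root; the cache check is optional):

```shell
# Full clean reinstall: only needed when the installed tree seems stale.
rm -rf node_modules
npm cache verify   # optional: verify the integrity of the local cache
npm install        # rebuilds node_modules from package.json / the lock file
```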
Related
I work on a largish project with ~10 devs. We have package.json and the resulting package-lock.json committed, and our CI pipeline runs npm ci to restore packages according to package-lock.json.
Currently, the developers are instructed to clone the repo and run npm install. However, I found that npm install may install different versions that still match the version spec in package.json - for example, ^5.0.5 might cause npm install to install version 5.1.1, or to keep 5.0.5 if it was already there.
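As an illustration of that caret behavior, here is a toy shell sketch of what ^5.0.5 permits. The real resolution is done by npm's semver library; this only mimics the rule that the major version must match and the version must be at least the stated floor:

```shell
# Toy check for the caret range ^5.0.5: same major, and >= 5.0.5 overall.
satisfies_caret() {
  # args: major minor patch of the candidate version
  maj=$1; min=$2; pat=$3
  [ "$maj" -eq 5 ] || return 1          # major must stay 5
  [ "$min" -gt 0 ] && return 0          # any 5.1.x and above is fine
  [ "$pat" -ge 5 ]                      # within 5.0.x, patch must be >= 5
}

satisfies_caret 5 1 1 && echo "5.1.1 matches ^5.0.5"
satisfies_caret 6 0 0 || echo "6.0.0 does not match ^5.0.5"
```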
So, I want to change the instructions for developers to:
(common case) If you don't want to change packages or package versions, only use npm ci
If you do, use npm install and/or npm update (possibly with --save-dev), test locally, and then commit the resulting package.json and package-lock.json.
Are these instructions sound? Am I missing something?
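Spelled out as commands, the proposed instructions would look something like this (the package name is a placeholder):

```shell
# Common case: reproduce exactly what package-lock.json records.
npm ci

# When intentionally changing dependencies:
npm install some-package          # or: npm update some-package
npm test                          # verify locally
git add package.json package-lock.json
git commit -m "Update dependencies"
```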
Per documentation "this command is similar to npm install, except it's meant to be used in automated environments such as test platforms, continuous integration, and deployment -- or any situation where you want to make sure you're doing a clean install of your dependencies." (emphasis mine).
I prefer using it instead of "install" because it gives some assurance about the state of the node_modules folder.
It will remove the node_modules folder if one is present, clearing out anything that is not in the lock file but may have been left behind by a previous install.
It will throw an error if someone changed dependencies by hand and didn't update the lock file.
It will be faster than install, because it doesn't need to build a new dependency tree, and it will preserve the versions of dependencies that were installed by tag (like latest or next) or by wildcard (*). Sometimes this is a very good thing - the recent colors incident is a good illustration.
Basically it means that my colleagues and I will all get identical node_modules contents. One of the advantages of Yarn in its early days was reproducible installs via a lock file, and it is considered good practice.
What does npm i --package-lock-only do exactly? The documentation is a tad shy on examples. https://docs.npmjs.com/cli/v6/configuring-npm/package-locks
I'm curious to know: if I have older packages in my local node_modules folder and no package-lock.json file, will npm i --package-lock-only generate a package-lock.json matching the versions in my local node_modules, or will it generate one with newer package versions, consistent with the semver ranges in package.json and resolved against the npm registry?
It will determine versions of packages to install using package.json, and then create a package-lock.json file with its resolved versions if none exists, or overwrite an existing one.
Significantly, it does not actually install anything, which is what distinguishes it from regular npm install (or the aliased npm i).
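In practice that makes the flag useful for refreshing the lock file without touching node_modules, along these lines:

```shell
# Resolve versions from package.json and (re)write package-lock.json,
# without installing anything into node_modules.
npm install --package-lock-only

# A plain install (or npm ci) then applies the refreshed lock file.
npm install
```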
Well, @Ben Wheeler is accurate, but it's worth giving a little background on this process. Normally, package-lock.json is meant to record the complete dependency tree of every package and its dependencies in your application, so that every developer, on any machine, gets the exact same tree. This is important because dependency packages may be updated over time, and if every developer used different versions it could break your application. So every time you run "npm i" with a package-lock.json present, npm actually installs the packages from the lock file, not from package.json.
Sometimes, when developers hit dependency errors, they tend to delete the lock file and node_modules, which is not always the best option. Most of the time it's enough to update only the lock file to reflect package.json using the --package-lock-only flag, and then run "npm i" again to install your packages. The lock file should be committed to your project's repo so everyone uses the same package versions.
package-lock.json is automatically generated for any operations where npm modifies either the node_modules tree, or package.json. It describes the exact tree that was generated, such that subsequent installs are able to generate identical trees, regardless of intermediate dependency updates.
This file is intended to be committed into source repositories, and serves various purposes:
- Describe a single representation of a dependency tree such that teammates, deployments, and continuous integration are guaranteed to install exactly the same dependencies.
- Provide a facility for users to "time-travel" to previous states of node_modules without having to commit the directory itself.
- Facilitate greater visibility of tree changes through readable source control diffs.
- Optimize the installation process by allowing npm to skip repeated metadata resolutions for previously-installed packages.
- As of npm v7, lockfiles include enough information to gain a complete picture of the package tree, reducing the need to read package.json files, and allowing for significant performance improvements.
I'm not a node expert by any means. In one project, something's gone wrong somewhere, and package-lock.json and package.json seem to have fallen out of sync. The only way I can get stuff to build is this sequence:
rm -rf node_modules
npm install
rm package-lock.json
npm install
webpack
i.e. I have to run npm install once with the package-lock, and then once without. There are a lot of dependencies, and tracking down which ones are at fault is proving difficult. What's the best way of resolving this so that I don't need to run npm install twice? And how can I prevent this sort of thing from arising in the future?
Note: Two different devs were working on this git repo, and it's very possible that the package-lock and package files were not checked in properly.
Could you paste your package.json and package-lock.json files here?
If they are too big, put them in a fiddle and share the link.
(Sorry that this is an answer, not a comment, but I don't have the 50 rep needed.)
It turns out the issue was with a specific version of @types/react-redux. There's a breaking change between 4.4.40 and 4.4.41. I was previously using ^4.4.40, so while the package-lock was being used, it fetched 4.4.40 and everything worked. When I deleted the package lock, the ^4.4.40 range pulled in 4.4.41, as it was the latest, and things broke. Changing the version from ^4.4.40 to just 4.4.40 has fixed it for now.
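For reference, npm can record an exact version instead of a caret range, either by editing package.json by hand or with the --save-exact flag (package and version taken from the post above):

```shell
# Pin the exact version so future installs never drift to 4.4.41.
npm install --save-exact @types/react-redux@4.4.40
```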
I would like to install the dependencies for some_project. I know I could cd into some_project and then run yarn install
But I was wondering if it's possible without changing the directory?
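Assuming yarn v1 (with the npm equivalent for comparison), both tools accept a flag that points the install at another directory:

```shell
# Install some_project's dependencies without cd'ing into it.
yarn --cwd some_project install

# npm equivalent:
npm --prefix ./some_project install
```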
There's a bug with yarn that prevents the --modules-folder option from working as intended. Besides, I personally don't like that the option has to be provided on the command line; how would you make sure that future installs also land in your chosen folder?
That's why I came up with this sneaky solution for npm, since I wanted full control of the install path, not just a prefixed version of node_modules. It works just as well with yarn; in fact we use it in production and we haven't had a problem yet (fingers crossed).
In a nutshell, you need to symlink node_modules to your desired folder in the preinstall event (to trick npm or yarn) and then delete the symlink in the postinstall event.
There's one caveat, however, which I didn't mention in the linked answer (since I didn't think of it at the time): things might not go as planned if the install fails badly. You'll end up with the symlink still in place, since the postinstall event might not have been triggered, and the next install might then fail because the symlink already exists.
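The trick can be sketched as plain shell (the target path is made up for illustration; in a real setup these commands would live in the preinstall and postinstall scripts of package.json):

```shell
# preinstall: point node_modules at the real install location.
# Guard first against a stale symlink left behind by a failed install.
if [ -L node_modules ]; then rm node_modules; fi
mkdir -p /tmp/real_modules            # illustrative target directory
ln -sfn /tmp/real_modules node_modules

# ... npm or yarn would install here, writing through the symlink ...

# postinstall: remove only the symlink; the installed files stay in place.
rm node_modules
```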
This started happening just recently, but every time I run npm install I end up getting dozens of node modules beyond what's listed in package.json.
This answer shows that this is a new feature of npm 3 where the dependencies are being "flattened" instead of nested. However, I don't want to look at a bazillion modules every time I venture into the folder. Is there any way I can disable this setting?
No, that cannot be disabled.
https://github.com/npm/npm/issues/10079
Is there any way I can force npm@3 to install a new package for me the old way, i.e. without calculating the project-wide tree? I just want the new package placed in node_modules with its dependencies nested in its own node_modules.
No. The new installer is pretty much a complete rewrite, and while there is some special-case code to install packages into siloed subdirectories, that's only available when doing global installs, to simplify packaging and managing shared tools.