When I build a package.json and run
npm install ./
npm installs dependencies of dependencies separately, even when they are exactly equal.
For example:
Express 4.0.0-rc4's node_modules/cookie === cookie-parser's node_modules/cookie
When Node.js loads a file, the module is cached per file path.
So if npm does not share dependencies, the same module can be parsed several times and use much more memory.
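For illustration, this is how the duplication shows up on disk under npm's older nested layout (npm 2 and earlier); the packages are just the example from above:

# two physical copies of the same cookie version, nested under each dependent
ls node_modules/express/node_modules/cookie
ls node_modules/cookie-parser/node_modules/cookie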
The reason is project maintenance and consistent (and simple) usage. Each package is perfectly independent of the others, and each package adheres to its own package.json file.
If one package updates its dependencies, npm simply has to check that package. If packages shared a reference, npm would not only need to install the new version for the one that changed, but also switch and re-reference the old version for the other packages. And if a package sharing a reference were deleted, npm would need to re-check all other packages to see whether any still used it. Sharing brings many more odd cases like these.
Storage is cheap these days and most npm modules are small in (file) size. Ease of maintenance and consistent updating are worth more than saving a few MB of files.
Related
What does npm i --package-lock-only do exactly? The documentation is a tad shy on examples. https://docs.npmjs.com/cli/v6/configuring-npm/package-locks
I'm curious to know: if I have older packages in my local node_modules folder and no package-lock.json file, will npm i --package-lock-only generate a package-lock.json according to the versions in my local node_modules folder, or will it generate a package-lock.json with newer package versions from the npm registry that are consistent with the semver ranges in my package.json?
It will determine versions of packages to install using package.json, and then create a package-lock.json file with its resolved versions if none exists, or overwrite an existing one.
Significantly, it does not actually install anything, which is what distinguishes it from regular npm install (or the aliased npm i).
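A quick way to see this for yourself (run it in a project that has a package.json but no lock file):

npm i --package-lock-only   # resolves versions and writes package-lock.json
ls node_modules             # unchanged (or still absent): nothing was installed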
Well, @Ben Wheeler's answer is accurate, but there's a place to give a little background on this process. In the regular situation, the package-lock is meant to record the complete dependency tree of every package and its dependencies in your application, so that every developer on a different machine has the exact same tree. This is important because dependency packages get updated over time, and if every developer used different versions it could break your application. So every time you do "npm i", if you do have a package-lock.json it actually installs the packages from there and not from package.json.
Sometimes when developers hit dependency errors they tend to delete the lock file and node_modules, which is not always the best option. Most of the time it's enough to update only the lock file to reflect package.json with the flag --package-lock-only, and then do "npm i" again to install your packages (see the sketch below). The lock file should be committed to your project repo so everyone can use it and get the same package versions.
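A sketch of that recovery workflow, instead of deleting everything:

npm i --package-lock-only   # refresh package-lock.json against package.json
npm i                       # then install from the refreshed lock file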
package-lock.json is automatically generated for any operations where npm modifies either the node_modules tree, or package.json. It describes the exact tree that was generated, such that subsequent installs are able to generate identical trees, regardless of intermediate dependency updates.
This file is intended to be committed into source repositories, and serves various purposes:
- Describe a single representation of a dependency tree such that teammates, deployments, and continuous integration are guaranteed to install exactly the same dependencies.
- Provide a facility for users to "time-travel" to previous states of node_modules without having to commit the directory itself.
- Facilitate greater visibility of tree changes through readable source control diffs.
- Optimize the installation process by allowing npm to skip repeated metadata resolutions for previously-installed packages.
- As of npm v7, lockfiles include enough information to gain a complete picture of the package tree, reducing the need to read package.json files, and allowing for significant performance improvements.
When running npm install --no-optional, it takes around 3 minutes every time to complete and installs ~200MB of files. I would like to speed up the build process, but I cannot find any way to really speed it up.
Doesn't npm install cache dependencies by default (like any other decent tool, e.g. maven, sbt or nuget)? If yes, shouldn't it be much faster than that? If no, then WHY, and how do I work around that?
I found the npm-cache package, but it seems to .tar all the dependencies, and when none of them changes, npm-cache will reuse the tar file. The downside of this is that whenever a small change in dependencies occurs, it won't be able to reuse the cache (from what I understand).
Are there any nice resources on why this is slow, how to speed it up, and how caching works with npm in general? Other tools that I have used (sbt, maven, nuget) are much faster, therefore my expectations are high for npm as well.
Another option I looked into is npm install -g, but it does not seem to solve any problems here, as it is meant for installing CLI tools like grunt, npm-cache, etc., since it adds them to the path. So this definitely doesn't solve the problem.
npm -v: 4.0.5
node -v: 6.8.1
The problem was that, coming from an sbt background, where sbt uses a local ivy cache to cache dependencies, I expected the same behaviour from npm. At least up to v5.0, npm didn't have a proper dependency caching mechanism, so you basically needed to re-download all of the dependencies every time you did an npm install with a clean node_modules folder.
There were some tools developed to work around that, but none of them were satisfactory.
But it seems that this may have been fixed in npm v5.0 with a new caching strategy, so if you have a similar issue, please take a look at the changes in the 5th version.
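For reference, npm 5 introduced a content-addressable cache, and npm 5.7 added npm ci, which installs straight from package-lock.json:

npm cache verify   # check the integrity and contents of the local cache
npm ci             # clean install from package-lock.json, reusing cached tarballs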
It's better to install the pnpm package using the following command:
npm i -g pnpm
pnpm uses hard links and symlinks to save one version of a module only ever once on a disk. When using npm or Yarn for example, if you have 100 projects using the same version of lodash, you will have 100 copies of lodash on disk. With pnpm, lodash will be saved in a single place on the disk and a hard link will put it into the node_modules where it should be installed.
As an example, whenever you want to install the dependencies from a package.json file, simply enter pnpm i and it handles the rest by itself. It is faster than npm because it reuses the dependencies you've installed before.
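A minimal sketch (the store location printed will vary per machine):

pnpm i            # install dependencies, hard-linking them from pnpm's global store
pnpm store path   # print where the shared store lives on this machine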
I am using angular-cli and building a SPA using Angular2. I have a Jenkins build system for my application where, every time there is a change in my project repository, a build is triggered which basically deletes the entire node_modules folder and then does npm install followed by my build process. All this is done remotely on a Linux machine.
Problem:
Now the issue I am facing is that of the secondary and tertiary dependencies. Most of the dependencies (if not all) I am installing using npm have their own package.json file, which in turn have their own, so on and so forth. So even if I freeze the versions in my main package.json file by removing carets or tildes, there is no way I can control the versions of the secondary and tertiary dependencies. This is resulting in a lot of UNMET PEER DEPENDENCY errors, as one dependency needs one version of the same component while another one needs a different version!
Question:
So my question is, how do I make sure that this does not happen and achieve a stable dependency installation?
You can keep your package.json as is and run npm shrinkwrap, which will create a new file, npm-shrinkwrap.json, with the exact versions of the whole package hierarchy installed at the time you ran it.
If you commit this file, the next time you run npm install, npm should detect the file and respect it.
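A sketch of that flow for your build setup:

npm shrinkwrap              # writes npm-shrinkwrap.json from the currently installed tree
git add npm-shrinkwrap.json # commit it alongside package.json
# later, on the Jenkins machine:
npm install                 # respects npm-shrinkwrap.json if present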
Documentation:
https://docs.npmjs.com/cli/shrinkwrap
P.S.
Another option that works similarly is Facebook's yarn client (an alternative client for the npm registry).
It uses its own yarn.lock file, and it keeps dependencies in its own shared cache, making subsequent installs much faster.
But for your use case on the build server, it might be harder to set up. That's why I focused the answer on npm itself.
I'm quite new to ReactNative so sorry if it's obvious, but..
Each RN project init-ed via the CLI has a large number of node modules stored in project_root/node_modules. Not that I would mind, but if you have several projects it seems redundant, and it takes up time/space to move it into the source versioning system.
Wouldn't it be possible to retrieve all these same modules from a general node_modules folder on the machine instead?
You never want to store dependencies nested in node_modules in your source control... it defeats the whole purpose of versioning and dependencies in general. Your package.json file will specify the versions so when you run npm install it knows exactly which dependencies to grab.
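To keep the directory out of version control in the first place (assuming git here):

echo "node_modules/" >> .gitignore   # never commit installed dependencies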
As an alternative, Yarn is an up-and-coming package client that Facebook developed. It does a much better job of caching your packages locally, so if multiple projects reuse the same dependencies, it still keeps them in each project's node_modules but doesn't need to perform HTTP requests for each one.
Yarn doesn't replace NPM as a package registry, just a better client to download, maintain, and cache those packages.
Yarn also adds a yarn.lock file (similar to Ruby's Gemfile.lock) that allows you to lock in the specific versions used in your app, regardless of the package.json. This file can be stored in version control, which is probably what you wanted to achieve by saving node_modules in version control.
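A sketch of the equivalent workflow with Yarn:

yarn install       # installs dependencies and writes/updates yarn.lock
git add yarn.lock  # commit the lock file instead of node_modules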
Some good reads...
Yarn vs NPM
Scotch.io Yarn Tutorial
Why I'm working on Yarn (Yehuda Katz)
I would echo Brad's answer: Don't put node_modules in version control. npm install will install the correct versions from the package.json. Just put package.json in version control, not node_modules.
However, if you still want to save disk space, you can install some of your dependencies in a general node_modules folder by using the link option:
npm config set link true -g
You can read more about link here: https://docs.npmjs.com/misc/config#link.
Note that you must not include node_modules in your version control when using this option since npm will put symlinks to the globally installed packages in node_modules. The global install location varies from machine to machine, so if node_modules is in version control, it may link to non-existent locations.
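For illustration, assuming the historical link config behaves as documented (lodash is just an example package; the global prefix path varies per machine):

npm config set link true -g
npm install lodash
ls -l node_modules/lodash   # should show a symlink into the global install location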
Recently I started committing my application's node_modules folder into VCS to speed up deployments and pin dependencies.
I noticed that many npm packages contain a bunch of stuff unnecessary to me, like tests and various builds that I'll never use, and I cringe every time I put it in my repo.
So, what should one put into npm package?
Tests and similar items usually belong in your devDependencies.
You can install packages without them by using npm install --production, or by setting the configuration flag with npm config set production true.
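Concretely (the config flag persists, so later plain installs skip devDependencies too):

npm install --production        # skip devDependencies for this install
npm config set production true  # or make production installs the default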
I would recommend looking at this page and reading the information in the different types of dependencies to get an understanding of what each does.
That being said, the bare minimum to include is just what it takes for your module to run, but that varies based on the module you're creating. A README.md is almost essential if you're sharing your package publicly, so users can get a quick overview of your package on npm and GitHub.