For a while now I've been using a private npm repository to publish some of my modules. Everything works fine apart from one little detail - I cannot make npm show work.
[Assumptions]
Let's assume that I have a private npm repo at http://my-repo.com:8081/nexus/content/groups/npm/ (yes, I'm using Nexus).
Let's assume I have changed my npm registry:
npm set registry http://my-repo.com:8081/nexus/content/groups/npm/
Let's assume I have my-module published to my-repo.
[My intentions]
I want to be able to check the latest (or maybe all) version(s) of my-module. However, using the standard npm show or npm view commands results in a lookup against npmjs.org and therefore doesn't find any version of my-module.
[Question]
Is there an npm way to see the version(s) of my-module in the scenario described above?
You can use an HTTP request. In this case, requesting http://my-repo.com:8081/nexus/content/groups/npm/[your package name] returns the package's information; then parse the response object.
You can also paste the link into a browser to see the response directly.
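For example (a sketch using the placeholder registry URL and module name from the question):
curl http://my-repo.com:8081/nexus/content/groups/npm/my-module
# The JSON response contains a "dist-tags" object (with the latest version)
# and a "versions" object listing every published version.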
It is possible!
You can actually call npm show directly like how you normally would.
The determining factor for this to work, though, is that you have to run it from the directory where your .npmrc file is located (usually the root of your project). npm show will then respect that file and look the package up using your credentials and private repo URL.
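For example, a minimal sketch assuming a project-level .npmrc that points at the registry from the question:
# .npmrc in the project root:
#   registry=http://my-repo.com:8081/nexus/content/groups/npm/
# then, run from that same directory:
npm show my-module versions   # every published version
npm show my-module version    # just the latest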
Related
I'm having trouble finding good documentation on how CI=true affects the npm install command.
How is CI=true npm install different from npm install?
Does CI=true even affect the install?
I tried it locally and it does NOT cause npm to behave like npm ci. I also tried running it without a package-lock.json or npm-shrinkwrap.json file present, and it still created the lock file.
Also, is there a difference between
CI=true npm install
and
CI=true
npm install
and
export CI=true
npm install
Setting the CI environment variable to true affects how npm gathers usage data.
How it affects npm is explained in the "Does npm send any information about me back to the registry?" section of the docs (for convenience, I've provided a verbatim copy of that section below). Consider particularly the description of the Npm-In-CI header.
In summary, setting CI=true causes npm to set the Npm-In-CI header to true, so the usage data gathered by npm records the package(s) as being installed by a "build farm" (i.e. for Continuous Integration purposes) rather than by a "human".
The following is what is stated in the docs at the aforementioned link:
Does npm send any information about me back to the registry?
Yes.
When making requests of the registry npm adds two headers with information about your environment:
Npm-Scope – If your project is scoped, this header will contain its scope. In the future npm hopes to build registry features that use this information to allow you to customize your experience for your organization.
Npm-In-CI – Set to “true” if npm believes this install is running in a continuous integration environment, “false” otherwise. This is detected by looking for the following environment variables: CI, TDDIUM, JENKINS_URL, bamboo.buildKey. If you’d like to learn more you may find the original PR interesting. This is used to gather better metrics on how npm is used by humans, versus build farms.
With regards to the specific part of your question, i.e.
Does CI=true even affect the install?
Generally, no - there is no notable difference in the resulting installation by npm with or without CI=true.
However, one possible scenario in which the installation of a package could be affected is if a package author defined a postinstall script in package.json that performs different conditional logic when the CI environment variable is set to true.
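For illustration only (nothing here comes from the question - the script name and messages are hypothetical), such a package might wire up "postinstall": "sh scripts/postinstall.sh" with a script like:
#!/bin/sh
# Hypothetical postinstall script that branches on the CI environment variable.
if [ "$CI" = "true" ]; then
  echo "CI environment detected - skipping local-only setup"
else
  echo "Running local development setup"
fi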
Regarding the last part of your question, i.e.
Also, is there a difference between ...
The accepted answer to this question addresses that.
In summary:
Your first two commands are similar - they both set the variable for the current shell session only (the first form applies it just to that single npm invocation, and the second sets an ordinary, unexported shell variable).
However, your last example, which uses export, sets the environment variable for the current shell and for all processes started from the current shell.
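As a quick illustration of that summary (assuming a POSIX shell such as bash; npm install is shown, but any command behaves the same way):
CI=true npm install   # CI is placed in the environment of this single npm invocation only
echo "$CI"            # prints nothing afterwards (assuming CI was not already set) - it did not persist
CI=true               # plain assignment: a variable in the current shell, not exported to child processes
npm install
export CI=true        # exported: visible to the current shell and to every process it starts
npm install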
The purpose of this effort is to be able to test whether a package version exists in a private registry, without having to touch the filesystem / config files. For packages in public registries this is perfectly straightforward: npm view lpad@2.0.1 produces some information about that published version, but (as of this writing) npm view lpad@201.0.0 produces no information or output. I'm using this to infer the existence of packages.
I can also pass a private registry URL to npm view <packagename>, as in npm view <packagename> --registry https://private.registry/path/. This seems to hit the private registry even though it isn't explicitly mentioned in the npm-view documentation (but it's described in the npm-search documentation, so I take this to mean it's a documented API feature).
To be able to talk to private registries at all, I can use an authentication token in the query according to these npm instructions for doing it in a CI/CD workflow: put it into the .npmrc file like this:
//your_registry/:_authToken=12345
Or, more securely, //your_registry/:_authToken=${TOKEN}, and set the TOKEN environment variable to 12345 elsewhere.
What I can't figure out how to do is use npm view against a private npm registry, without writing to the .npmrc file.
I plan to be running several queries in parallel from the same machine, so to avoid race conditions in the .npmrc file, I'd rather pass the authentication directly in each command. I assume that with an auth token, this is just a simple curl command but I haven't had much luck finding information on how the NPM API works. (The npm-registry-client doesn't appear to do anything related to view/find; it has access which sets an access level).
Am I missing something blindingly obvious? Where can I find a guide on the request format for view and/or search functions of an NPM registry? What is the curl command that includes sending the auth token, package name, and version and receives some indication of whether it exists?
Found the answer here: https://github.com/npm/registry/blob/master/docs/user/authentication.md
#!/bin/sh
curl -H "Authorization: Bearer $TOKEN" "https://your_registry/$PACKAGE/$VERSION"
If the package does not exist, it will return {}. If it does, you'll get the package information.
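For instance, a small wrapper that turns that {} response into an exit code (a sketch - TOKEN, PACKAGE, VERSION and your_registry are placeholders you would supply):
#!/bin/sh
# Exits 0 if the given package version exists in the registry, 1 otherwise.
response=$(curl -s -H "Authorization: Bearer $TOKEN" "https://your_registry/$PACKAGE/$VERSION")
if [ "$response" = "{}" ]; then
  echo "$PACKAGE@$VERSION not found"
  exit 1
fi
echo "$PACKAGE@$VERSION exists"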
I have a project which consists of one root node package containing subpackages linked together by npm link - these subpackages depend on each other (listed in package.json dependencies), and the structure basically looks like this:
-rootpackage
--subpackageA
--subpackageB
Let's say subpackageA has a dependency on subpackageB, so I link them to avoid publishing/reinstalling subpackageB in subpackageA after every change to the source of subpackageB.
The link works just fine until I run npm update in subpackageA, which causes subpackageB to be unlinked.
Now, I see two options:
I can theoretically run the npm link operation after each npm install or npm update to ensure the links are always present. This works with a postinstall script in the case of installation, but in the case of an update, postinstall is not called, and I don't know of any postupdate hook in npm that would be called after an update.
Maybe there is a way to do this more cleverly, perhaps with Yarn (which I am also using), that somehow prevents the unlinking or excludes my subpackages from the update, so I don't lose the links between them - but right now I am not aware of such a way.
Is there any way to make one of those options work, or any other way to solve this problem? I need to keep this link and others in place so we don't have to run npm link after every installation/update. I can't really find information about this issue anywhere. Btw, I am using Node 6.4.0 and npm 3.10.3.
So the solution is to use Yarn Workspaces, or maybe a project like Lerna.
Yarn Workspaces is a utility that expects a structure similar to what was described in the question and maintains the links between the subpackages and the root automatically. It is very easy to set up (just two lines in the root package.json and executing yarn for the first time), and after that you don't have to worry about upgrades or installs at all - the links stay in place unless you delete them manually.
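For reference, a minimal sketch of that setup using the layout from the question (the package names are placeholders):
# In the root package.json, add the two lines:
#   "private": true,
#   "workspaces": ["subpackageA", "subpackageB"]
# then run Yarn once from the root of the project:
yarn
# Yarn symlinks subpackageB into node_modules, so subpackageA keeps resolving the local copy.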
Lerna expands on that and provides additional tooling for managing multi-package projects. It can use Yarn Workspaces internally for the linking if you use Yarn, but that is not a requirement, and it works fine with npm. Just make sure you have Git, because the last time I checked, Lerna didn't work with SVN or other VCSs.
npm link seems cool, but what are the differences between npm link and requiring the module by giving its path? Could you please elaborate on the advantages of each?
When you use npm link you can require it like:
var foo = require("foo");
but if you use the path, you require it like:
var foo = require("./lib/foo");
Thanks
npm link is useful if you are developing a node module that has a dependency on another standalone node module you are also developing simultaneously (and which you may then publish to npm when it is ready/releasable). With this setup you always get the freshest version of the other module without needing to push releases to npm.
It is better than using relative dependencies because relative paths can differ per developer, whereas npm link makes the module behave as if it were installed from npm (it is resolved from the node_modules folder).
Conclusion: I usually use relative paths inside a module itself to require its own files, and npm link to specify dependencies between simultaneously developed standalone modules.
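For completeness, the typical two-step npm link workflow looks like this (the directory and module names are just examples):
cd ~/projects/foo      # the standalone module under development
npm link               # registers a global symlink to this folder
cd ~/projects/my-app   # the project that depends on foo
npm link foo           # makes node_modules/foo a symlink to ~/projects/foo
# require("foo") in my-app now resolves to the live, locally developed copy.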
I'm trying to develop a node app with some supporting libraries. The projects all live in private repositories that are accessed via ssh. Username and password authentication are not an option.
So far I've been adding the git repository URL to package.json:
"dependencies": {
"my-library":"git+ssh://git#repo-url:repo-name.git#master"
},
This isn't great, as it doesn't lead to reproducible builds, and it means that developing the client code and the library at the same time requires a push for every little change.
npm link appears to be ideal for solving this, but running npm link in the library directory produces the following error:
~/mylibrary$ npm link
npm ERR! Error: EACCES, unlink '/usr/local/lib/node_modules/mockelganger'
etc etc
Fair enough, it's trying to modify a system-global location.
~/mylibrary$ sudo npm link
|it@repo-url's password: -
Where that "|" obscuring the "g" is an animated spinner. I've determined that this is what git does when I try to access a repository as root; for whatever reason it fails to see my ~/.ssh/id_rsa or my ssh agent.
I suppose I could solve this by figuring out how to run an ssh-agent for root, but it just doesn't make sense to me that npm link even needs to read from git. It's modifying the configuration of only my computer, so why is it accessing the network? This makes me think I'm doing something else wrong.
The solution is to change the npm prefix to somewhere that doesn't require root permissions.
npm config set prefix ~/npm
Remember to add ~/npm to your PATH if you're installing executables such as CoffeeScript.
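For example (a sketch; the ~/npm prefix matches the answer above, and on most Unix systems npm places global executables in <prefix>/bin):
npm config set prefix ~/npm
export PATH="$HOME/npm/bin:$PATH"   # so globally installed executables are found
cd ~/mylibrary
npm link                            # now writes under ~/npm - no sudo needed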