Add custom metadata or config to package.json, is it valid? - node.js

I have seen (don't remember where) a package.json file with custom keys starting with an underscore:
{
    "name": "application-name"
  , "version": "0.0.1"
  , "private": true
  , "dependencies": {
        "express": "2.4.7"
      , "jade": ">= 0.0.1"
    }
  , "_random": true
}
Are you allowed to do this? Is it still valid? If this is allowed, is there any documentation on the rules?
Thanks!

tl;dr:
Yes, you're allowed to add custom entries to package.json.
Choose a key name:
not already defined (details below)
not reserved for future use (details below)
not prefixed with _ or $
and preferably use a single top-level key in which to nest your custom entries.
E.g., if you own the domain example.org, you could store a custom key random inside a top-level key in reverse-domain-name notation, with _ substituted for . (and for -, if applicable), e.g. org_example:
{
    "name": "application-name"
  , "version": "0.0.1"
  , "private": true
  , "dependencies": {
        "express": "2.4.7"
      , "jade": ">= 0.0.1"
    }
  , "org_example": {
        "random": true
    }
}
To read such custom properties, use the following technique:
require("./package.json").org_example.random // -> true
npm's package.json file format mostly complies with the CommonJS package specification:
keys that npm currently uses: https://docs.npmjs.com/files/package.json
keys defined in the spec: http://wiki.commonjs.org/wiki/Packages/1.1
As for choosing custom keys, the CommonJS package specification states:
The following fields are reserved for future expansion: build, default, email, external, files, imports, maintainer, paths, platform, require, summary, test, using, downloads, uid.
Extensions to the package descriptor specification should strive to avoid collisions for future standard names by name-spacing their properties with innocuous names that do not have meanings relevant to general package management.
The following fields are reserved for package registries to use at their discretion: id, type.
All properties beginning with _ or $ are also reserved for package registries to use at their discretion.

Given the nature of JSON and this statement from the Nodejitsu documentation, I don't see anything wrong with that:
NPM itself is only aware of two fields in the package.json:
{
  "name" : "barebones",
  "version" : "0.0.0"
}
NPM also cares about a couple of fields listed here. So as long as it is valid JSON and doesn't interfere with Node.js or NPM everything should be alright and valid.
Node's awareness of package.json files extends to the main field (ref. the Node.js modules documentation):
{ "name" : "some-library",
"main" : "./lib/some-library.js" }
If this was in a folder at ./some-library, then
require('./some-library') would attempt to load
./some-library/lib/some-library.js.
This is the extent of Node's awareness of package.json files.
To avoid possible conflicts you should prefix your keys with some character or word. It is not recommended to use an underscore (_) or a dollar sign ($), as those prefixes are reserved, but other choices are viable.
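For instance (the myapp- prefix here is just an arbitrary illustration, not a convention defined anywhere), a prefixed custom key might look like this:
{
  "name": "application-name",
  "version": "0.0.1",
  "myapp-random": true
}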

Related

Configure ESLint to error when objects are defined with certain keys

I know of the no-restricted-properties option that allows setting up rules to error when accessing certain object keys (to discourage use of deprecated APIs and the like), but I cannot find a rule to disallow setting of certain keys.
Is this possible in ESLint?
To explain further, our project uses Sequelize ORM which uses the keyword allowNull for nullable columns, and we often copy our Sequelize model definitions directly into node-pg-migrate migration files, which uses the subtly different notNull keyword.
I always forget to change the object key in a definition from allowNull to notNull and would like a way to check this in the linter in a directory specific .eslintrc file.
I found that the similarly named no-restricted-syntax rule allows you to disallow pretty much anything you can match using JavaScript AST selectors. Using the very helpful AST Explorer web tool, I was able to add a .eslintrc file in the directory with our database migrations with a single rule to error when objects have the key allowNull:
{
  "rules": {
    "no-restricted-syntax": [
      "error",
      "Identifier[name='allowNull']"
    ]
  }
}
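If that broad Identifier selector also flags variables or other identifiers named allowNull, a narrower version restricted to object keys might look like the sketch below (it assumes ESLint's nested-attribute selectors and the object form of no-restricted-syntax; verify against your ESLint version):
{
  "rules": {
    "no-restricted-syntax": [
      "error",
      {
        "selector": "Property[key.name='allowNull']",
        "message": "Use notNull instead of allowNull in migration files."
      }
    ]
  }
}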

How to handle validation scripts distributed in different files in Node.js?

I have several modules, each exporting a function that returns true or false depending on whether the payload is valid. I also have a JSON config file in which I specify the name of the validator script to use depending on the payload type:
[
  {
    "boardVersion": "1",
    "availableInterfaces": [
      { "name": "digital", "validator": "digitalV1" },
      { "name": "analog", "validator": "analogV1" }
    ]
  }
]
And for example inside digitalv1.js I have something like:
const validator = require('validator');

// validator.isInt expects a string; return the boolean result
module.exports = (value) => {
  return validator.isInt(value, { min: 0, max: 3 });
};
And finally, there is a controller that gets the payload from Express and, depending on the endpoint called, decides which validator to use. The question now is how to load the right validator in the controller.
There are 2 approaches that come to my mind:
In the controller, I require every validator and put them in a key-value structure (an object), in which the key is the name of the validator and the value is the validator itself.
Instead of defining a validator name in the JSON file, I could just put the path to the file in the file system and require the file when I need it.
Is there a better/cleaner way to approach this? Feel free to suggest even a different architecture. The idea, though, is to keep the validators separated for the sake of code cleanliness.
You can use option 1, but with a "barrel module" that imports the validators and attaches them to an array or object. Any module that needs the validators can then just import the barrel module and use the provided lookup function, index, or key to get the validator it needs.
Be careful of circular dependencies: if you have them, you can use module.exports.x = and Node will handle them correctly (Note: I'm not sure if this is necessary with ES6 modules, which handle circular dependencies a little differently; just something to be on guard for).
Since the knowledge to create this "barrel module" already exists in your JSON, you could watch your JSON file in your build process, and automatically generate this barrel module if any of the file associations change, throw an error if a specified module is not found at a certain file path, etc.
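As a rough sketch (the file names and lookup helper are assumptions based on the question, not code from the answer), such a barrel module could look like this:
// validators/index.js - hypothetical barrel module
const digitalV1 = require('./digitalV1');
const analogV1 = require('./analogV1');

// Lookup table: validator name (as used in the JSON config) -> validator function
const validators = { digitalV1, analogV1 };

module.exports = function getValidator(name) {
  const validator = validators[name];
  if (!validator) {
    throw new Error('Unknown validator: ' + name);
  }
  return validator;
};
The controller can then call getValidator(interfaceConfig.validator) and apply the returned function to the payload.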

node-config multiple configuration files

I am looking at this https://github.com/lorenwest/node-config, and it seems to do everything I need. However I am wondering if it's possible to specify multiple config files based on their categories.
Is it possible to do this?
./config
default-aws.json or aws-default.json
production-aws.json or aws-production.json
db-default.json
db-production.json
etc..
so that the config files can be smaller? I know we could make one giant config that has all of those in different sections, e.g.
{
  "aws": {
    "kinesis": "my-stream"
    ...
  },
  "db": {
    "host": "my-host"
    ...
  }
}
Does anyone have any ideas whether this is doable using node-config or a different library that works similarly to node-config?
Thanks & regards
Tin
The short answer is no: node-config doesn't support this (as noted in Mark's answer).
The simplest way to do this with node-config is to use JavaScript config files (https://github.com/lorenwest/node-config/wiki/Configuration-Files#javascript-module---js). This way you still get most of the benefits of using node-config. You can then simply include other files inside them (https://github.com/lorenwest/node-config/wiki/Special-features-for-JavaScript-configuration-files#including-other-files).
Note that it would be a bit harder to use multiple config files and overrides for the inner files.
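A minimal sketch of that approach (the file names follow the question; treat this as an illustration rather than code from the answer):
// config/default.js - a JavaScript config file that pulls the smaller files together
module.exports = {
  aws: require('./aws-default.json'),
  db: require('./db-default.json')
};
With that in place, config.get('aws.kinesis') and config.get('db.host') work as usual.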
A slightly more complex implementation can use the utility class in config which actually allows you to directly read a specific folder using the same patterns that node-config uses internally (https://github.com/lorenwest/node-config/wiki/Using-Config-Utilities#loadfileconfigsdirectory).
In that case you would probably want to combine all the files to a single config by setting them on the config object before the first call to get (https://github.com/lorenwest/node-config/wiki/Configuring-from-an-External-Source).
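A rough sketch of that second approach (the directory layout and the merge call are assumptions; check the linked wiki pages for the exact utility signatures):
// Merge an extra config directory into the config object before the first call to get()
const config = require('config');

// loadFileConfigs() reads a directory using node-config's own file-naming patterns
const awsConfig = config.util.loadFileConfigs(__dirname + '/config/aws');
config.util.extendDeep(config, { aws: awsConfig });

console.log(config.get('aws.kinesis'));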
I'm a maintainer of node-config. The next version will support a feature to declare multiple NODE_ENV values, separated by a comma.
So if you were doing development in the cloud, you could declare a "development" environment followed by a "cloud" environment.
First the development config would be loaded, followed by the "cloud" config.
The related pull request is #486.
I use nconf. It lets you read multiple configuration files into the same object. Here is an example:
var nconf = require('nconf');
//read the config files; first parameter is a required key
nconf.file('aws', {file: 'default-aws.json'});
nconf.file('db', {file: 'db-default.json'});
console.log(nconf.get('aws:kinesis'));
console.log(nconf.get('db:host'));
default-aws.json:
{
  "aws": {
    "kinesis": "my-stream"
  }
}
db-default.json:
{
  "db": {
    "host": "my-host"
  }
}
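One caveat worth noting (based on nconf's documented hierarchy; verify for your version): stores are consulted in the order they are attached, and earlier stores take precedence, so if you also load per-environment files, attach them before the defaults:
// Attach environment-specific files first so their values override the defaults
nconf.file('aws-prod', { file: 'production-aws.json' });
nconf.file('aws', { file: 'default-aws.json' });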

RequireJS Map Configuration of Multiple files

I have some files, like,
test/test1,
test/test2,
test/test3
and I want to remap their paths to
newtest/test1,
newtest/test2,
newtest/test3
so that if we try to require the above files, they will point to the new paths.
In RequireJS, one-to-one mapping with map is available, but I'm not sure if something like this is achievable:
require.map = {
  "test/*": "newtest/*",
}
Any help :)
The map configuration option supports mapping module prefixes. Here's an example:
require.config({
  map: {
    "*": {
      "foo": "bar"
    }
  }
});

define("bar/x", function () {
  return "bar/x";
});

require(["foo/x"], console.log);
The last require call that tries to load foo/x will load bar/x.
This is as good as it gets for pattern matching with map. RequireJS does not support putting arbitrary patterns in there. Using "test/*": "newtest/*" would not work. The "*" I used in map is not a pattern. It is a hardcoded value that means "in all modules", and happens to look like a glob pattern. So the map above means "in all modules, when foo is required, load bar instead".
I wonder if you really need map. You can probably just use paths. Quoting from the map documentation:
This sort of capability is really important for larger projects which
may have two sets of modules that need to use two different versions
of 'foo', but they still need to cooperate with each other.
Also see the paths documentation:
Using the above sample config, "some/module"'s script tag will be
src="/another/path/some/v1.0/module.js".
Here is how paths can be used to map the directory from test to newtest for your example
paths: {
  "test": "newtest"
}
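For completeness, here is a sketch of the full configuration with a call that exercises the remap (the module names are taken from the question; the file layout is assumed):
require.config({
  paths: {
    "test": "newtest"
  }
});

// Loads newtest/test1.js even though the code asks for test/test1
require(["test/test1"], function (test1) {
  console.log(test1);
});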

Why are all my hosts installing a package defined for one host on the puppetmaster

I'm new to puppet and I'm trying to figure out how to get different hosts installing different packages, but I've stumbled upon an issue I can't figure out. These are my manifests:
My site.pp:
node default {
}
node 'debh3' inherits default {
}
node 'debh4' inherits default {
  import "db"
}
node 'master' inherits default {
}
My db.pp:
package { 'mysql-server':
  ensure => installed
}

service { 'mysql':
  ensure => true,
  enable => true,
  require => Package['mysql-server']
}
With this setup, mysql-server is being installed on debh3.
If I replace the import "db" statement with the actual code inside my db.pp, then mysql-server is only installed on debh4 (which is the behaviour I'm after).
Does anyone have a clue what I'm doing wrong here? I've put it all in site.pp to ensure there are no other dependencies affecting anything.
Also note that the import statement is deprecated and will be removed in Puppet 4.0.
You should move your code to modules. In this case, you want to create a db module.
In /etc/puppet/modules/db/manifests/install.pp
class db::install {
  package { 'mysql-server':
    ensure => installed
  }
}
and in /etc/puppet/modules/db/manifests/service.pp
class db::service {
  include db::install

  service { 'mysql':
    ensure => true,
    enable => true,
    require => Class['db::install'],
  }
}
From your node block, you can then just
include db::install
include db::service
or even just include db::service.
You could have both resources in one class, but it's good practice to structure your code through multiple classes.
Upon further digging, I found this in the "import" documentation at https://docs.puppetlabs.com/puppet/latest/reference/lang_import.html:
Import statements have the following characteristics:
They read the contents of the requested file(s) and add their code to top scope
They are processed before any other code in the manifest is parsed
They cannot be contained by conditional structures or node/class definitions
These quirks mean the location of an import statement in a manifest does not matter.
This explains why what I was doing was incorrect and why it caused this behaviour. As for a solution, I will look into best practices and determine the "correct" way to structure my manifests.
