My module tree looks like this:
- modules
  - socle1
    - stdlib
  - socle2
    - ntp
How do I include the stdlib module in my site.pp?
I have tried include socle1::stdlib and it is not working.
Should I modify the environment.conf for the directory environment?
If you want to arrange your modules in separate trees, then you may do so. You should then include each base path in your environment's modulepath, and refer to the modules by their regular names. Note in particular that altering the path to a module does not change its name or the names of any of the classes or types it defines -- the path influences only whether the autoloader can find them.
I strongly advise against making subdirectories of the standard module directory, however. Instead, if you want to group modules in multiple directories, then create parallel module directories for that purpose:
- modules
- socle1
  - stdlib
- socle2
  - ntp
Should I modify the environment.conf for the directory environment?
In order to support any module directories beyond or instead of the default, yes, you should. The Puppet documentation describes how to configure your environment's modulepath. But do consider following @MattSchuchard's advice and instead restricting yourself to the standard module directories.
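For example, with the parallel module directories shown above, environment.conf could set the modulepath like this (a sketch; directory names taken from the question):
modulepath = socle1:socle2:modules:$basemodulepath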
You are not supposed to put modules inside of other modules. Your tree should be like:
- modules
  - socle1
  - stdlib
  - socle2
  - ntp
Also, you would very rarely include stdlib, because stdlib is almost entirely a type/function module, so you would only reference its types and functions. You would not be declaring its classes unless you were planning on using the stages functionality it provides (thanks to John Bollinger for corrections to this paragraph).
However, declaring the ntp module in your site.pp is as simple as:
include ntp
or:
class { 'ntp': }
inside of your node { }.
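For example (node name hypothetical):
node 'web01.example.com' {
  include ntp
}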
In your init.pp:
class classname ( parameters ) {
  include ::socle2::ntp
}
Try this and tell me if this works or not!
I am building a custom binary of NodeJS from the latest code base for an embedded system. I have a couple of modules that I would like to ship as standard with the binary - or even run a custom script that is compiled into the binary and can be invoked through a command line option.
So two questions:
1) I vaguely remember that node allowed including custom modules at build time, but I went through the latest 5.9.0 configure script and I can't see anything related - or maybe I am missing it.
2) Did someone already do something similar? If yes, what were the best practices you came up with?
I am not looking for something like Electron or other binary bundlers but actually building into the node binary.
Thanks,
Andy
So I guess I figured it out much faster than I thought.
For anyone else: you can add any NPM module to the binary just by adding the module's actual source files to the node.gyp configuration file.
Compile it and run the custom binary. It's all in there now.
> var cmu = require("cmu");
undefined
> cmu
{ version: [Function] }
> cmu.version()
'It worked!'
After studying this for quite a while, I have to say that flyandi's answer is not quite true. You cannot add any NPM module just by adding it to node.gyp.
You can only add pure JavaScript modules this way. To embed a C++ module (I deliberately don't use the word "native", because that one is quite ambiguous in nodeJS terminology - just look at the sources), quite a bit more work is needed, as described below.
To summarize this:
To embed a JS module to your custom nodejs, just add it in the library_files section of the node.gyp file. Also note that it should be placed within the lib folder, otherwise you'll have troubles requiring the module. That's because the name/path listed in node.gyp / library_files is used to encode the id of the module in the node_javascript.cc intermediate file which is then used when searching for the built-in modules.
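A sketch of the relevant node.gyp fragment (mymodule is a hypothetical name; the stock entries are elided):
'library_files': [
  # ...the existing lib/*.js entries...
  'lib/mymodule.js',  # the embedded JS module, placed under lib/
],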
Embedding a native module is much more difficult. The best way I have found so far is to build the module as a static library instead of a dynamic one, which for a cmake(-js) based module you can achieve by changing the SHARED parameter to STATIC like this:
add_library(${PROJECT_NAME} STATIC ${SRC})
instead of:
add_library(${PROJECT_NAME} SHARED ${SRC})
And also changing the suffix:
set_target_properties(
  ${PROJECT_NAME}
  PROPERTIES
  PREFIX ""
  SUFFIX ".lib"  # instead of .node
)
Then you can link it from node.gyp by adding this section:
'link_settings': {
  'libraries': [
    "path/to/my/library.lib",
    # ...add other static dependencies
  ],
},
(How to do this with a node-gyp based project should be quite easy to google.)
This allows you to build the module, but you won't be able to require it, because the require() function in node can only be used to load built-in JS modules, external JS modules, or external dynamic node modules. But now we have a built-in C++ module. Well, a lot of node's integrated modules are C++, but they always have a JS wrapper in /lib, and those wrappers use process.binding() to load the C++ module. That is, process.binding() is a sort of require() function for integrated C++ modules.
That said, we also need to call process.binding() instead of require() to load our integrated module. To be able to do that, we have to make our module "built-in" first.
We can do that by replacing
NODE_MODULE(mymodule, InitAll)
in the module definition with
NODE_BUILTIN_MODULE_CONTEXT_AWARE(mymodule, InitAll)
which will register it as internal module and from now on we can process.binding() it.
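As an example, a minimal JS wrapper (hypothetical name, placed under lib/ as described earlier) could look like this:
// lib/mymodule.js - thin wrapper around the built-in C++ module
'use strict';
const binding = process.binding('mymodule');  // the name registered above
module.exports = binding;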
Note that NODE_BUILTIN_MODULE_CONTEXT_AWARE is not defined in node.h like NODE_MODULE is, but in node_internals.h, so you either have to include that one or copy the macro definition over to your cpp file (the first option is of course better, because the nodejs API tends to change quite often...).
The last thing we need to do is to list our newly integrated module among the others so that node knows to initialize it (that is, include it in the list of modules searched when loading modules with process.binding()). In node_internals.h there is this macro:
#define NODE_BUILTIN_STANDARD_MODULES(V) \
  V(async_wrap) \
  V(buffer) \
  V(cares_wrap) \
  ...
So just add your module to the list the same way as the others: V(mymodule).
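The result would look something like this (module name hypothetical):
#define NODE_BUILTIN_STANDARD_MODULES(V) \
  V(async_wrap) \
  V(buffer) \
  V(cares_wrap) \
  ... \
  V(mymodule)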
I might have forgotten some step, so ask in the comments if you think I have missed something.
If you wonder why anyone would even want to do this... You can come up with several reasons, but here's the one most important to me: the packaging tools used to pack your project into one executable (like pkg or nexe) work only with node-gyp based modules. If you, like me, need to use a cmake based module, the final executable won't work...
I have a puppet module that uses gini-archive. Recently I changed my module to depend on biemond-wildfly, which depends on nanliu-archive.
However, I can't install nanliu-archive, because both of these archive modules install into a directory called archive. This, I believe, violates the puppet module requirements, as they should both install into directories called <username>-archive.
However, even if I put them in different directories, I still have a problem. Both classes are called archive (actually one is a class and one is a define, but I don't think that's too important right now), so when my module says include archive, puppet isn't going to know which one I want.
Note I have a java background where every class is in a package hierarchy which prevents these kind of issues, but I can't see any equivalent for puppet.
I know I could have a whole load of different modules directories (/etc/puppet/modules, /etc/puppet/modules2 etc), but puppet still seems to look through these in order, meaning it will always load the archive class from the first module directory in the list.
Is there any way of solving this or have I reached the limit of what puppet can do? I'd rather not have to fork every single module and change the class names, that seems to defeat the point of the forge.
Thanks.
The name of the directory the module is in must be archive; the username is only used for distributing and packaging modules and is not used by Puppet during autoloading. Basically, what you are seeing is correct.
There seem to be two ways of handling this:
1. Fork one of the two archive modules and rename it so that it does not collide.
2. Fork one of the modules that uses an archive module and migrate it to use the same archive module as the other one. Since the two archive modules do almost the same thing, I prefer this method.
I just did this, so I'm going to expand a bit on option (1) in @ChrisPitman's answer by including more details, using a module I just forked & renamed as an example.
(Unfortunately) the simplest solution is to fork one of the modules and rename it. Below is an example using puppet/selinux and thias/selinux which have a namespace collision at selinux. The following steps were taken to re-namespace the thias/selinux module into the namespace selinux_thias:
1. Fork the module. In this example I have created USF-IMaRS/puppet-selinux from thias/puppet-selinux.
2. Install the module into modules/$NEW_NAME. Using git submodules this is: git submodule add https://github.com/USF-IMARS/puppet-selinux modules/selinux_thias
3. Rename the module class(es). Here is a commit demonstrating what this basically looks like.
4. Modify modules using thias/selinux to use the new name selinux_thias instead of selinux.
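The class rename in step 3 is mostly mechanical: every class (and every reference to it) moves to the new namespace, roughly like this sketch (bodies elided):
# manifests/init.pp, before:
class selinux { ... }
# manifests/init.pp, after:
class selinux_thias { ... }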
In the RequireJS documentation (http://requirejs.org/docs/api.html#modulename), I couldn't understand this sentence:
You can explicitly name modules yourself, but it makes the modules less portable
My questions are:
Why does explicitly naming a module make it less portable?
When is explicitly naming a module needed?
Why does explicitly naming a module make it less portable?
If you do not give the module a name explicitly, RequireJS is free to name it whichever way it wants, which gives you more freedom regarding the name you can use to refer to the module. Let's say you have a module in the file bar.js. You could give RequireJS this path:
paths: {
    "foo": "bar"
}
And you could load the module under the name "foo". If you had given a name to the module in the define call, then you'd be forced to use that name. An excellent example of this problem is jQuery. It so happens that the jQuery developers have decided (for no good reason I can discern) to hardcode the module name "jquery" in the code of jQuery. Once in a while someone comes on SO complaining that their code won't work, and their paths configuration has this:
paths: {
    jQuery: "path/to/jquery"
}
This does not work because of the hardcoded name. The paths configuration has to use the name "jquery", all lower case. (A map configuration can be used to map "jquery" to "jQuery".)
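A sketch of that configuration (the path is hypothetical):
require.config({
    paths: {
        jquery: "path/to/jquery"   // must use the hardcoded lowercase name
    },
    map: {
        "*": { jQuery: "jquery" }  // modules asking for "jQuery" get "jquery"
    }
});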
When is explicitly naming a module needed?
It is needed when there is no other way to name the module. A good example is r.js when it concatenates multiple modules together into one file. If the modules were not named during concatenation, there would be no way to refer to them. So r.js adds explicit names to all the modules it concatenates (unless you tell it not to do it or unless the module is already named).
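For example, r.js turns an anonymous module into a named one during concatenation, so it can still be found inside the bundle (module id hypothetical):
define('app/util', [], function () {
    return {};
});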
Sometimes I use explicit naming for what I call "glue" or "utility" modules. For instance, suppose that jQuery is already loaded through a script element before RequireJS, but I also want my RequireJS modules to be able to require the module jquery to access jQuery rather than rely on the global $. Then, if I ever want to run my code in a context where there is no global jQuery, I don't have to modify my modules for that situation. I might have a main file like this:
define('jquery', function () {
    return $;
});
require.config({ ... });
The jquery module is there only to satisfy modules that need jQuery. There's nothing gained by putting it into a separate file, and to be referred to properly, it has to be named explicitly.
Here's why named modules are less portable, from SitePen's "AMD, The Definitive Source":
AMD is also “anonymous”, meaning that the module does not have to hard-code any references to its own path, the module name relies solely on its file name and directory path, greatly easing any refactoring efforts.
http://www.sitepen.com/blog/2012/06/25/amd-the-definitive-source/
And from Addy Osmani's "Writing modular javascript":
When working with anonymous modules, the idea of a module's identity is DRY, making it trivial to avoid duplication of filenames and code. Because the code is more portable, it can be easily moved to other locations (or around the file-system) without needing to alter the code itself or change its ID. The module_id is equivalent to folder paths in simple packages and when not used in packages. Developers can also run the same code on multiple environments just by using an AMD optimizer that works with a CommonJS environment such as r.js.
http://addyosmani.com/writing-modular-js/
And here's why one would need an explicitly named module, again from Addy Osmani's "Writing modular javascript":
The module_id is an optional argument which is typically only required when non-AMD concatenation tools are being used (there may be some other edge cases where it's useful too).
I was wondering whether Node.js/npm includes any kind of extension mechanism comparable to Python setuptools' "entry points".
So, in short:
is there any way I can do dynamic discovery of services provided by other packages using npm?
if not, what would be the best way to implement something similar? Specifying the extension name in the main module's configuration file seems to be the logical solution, but I wonder whether something "automatic" can be done.
I'm not aware of any builtin mechanism to do this.
One viable way of doing it yourself:
I made a small tool (Jumpstart) to quickly create project scaffolding from templates with placeholders, and I used a kind of plugin mechanism for that. It basically comes down to the Jumpstart script searching for modules named jumpstart-* "adjacent" to where the module itself is installed, so it works for both local and global installations. If installed locally, it searches the other local modules (on the same level); if installed globally, it searches the other global modules.
Note that here, "search" comes down to a simple fs.exists check to see if there's a Jumpstart template module with a particular name installed. However, nothing would stand in the way of actually getting a full list of all installed packages matching the jumpstart-* pattern and loading them all at once. I could also search up the entire directory tree for node_modules directories and do the same. There's no point in doing this for this particular program, however.
See https://npmjs.org/package/jumpstart for docs.
The only limitation of this technique is that all modules must be named in a consistent fashion: start with some string, end with some string, something like that. Any rogue packages polluting the namespace could be detected by doing further checks on a package's contents: what files does it contain? What kind of object does its main module export? Etc.
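As a minimal sketch of this discovery-by-naming-convention approach (the myapp- prefix and directory layout are hypothetical; real code should also handle read errors and scoped packages):
// discover.js - find and load all sibling packages named "myapp-*"
var fs = require('fs');
var path = require('path');

function discoverPlugins(prefix) {
    // the node_modules directory "adjacent" to this package
    var modulesDir = path.resolve(__dirname, '..');
    return fs.readdirSync(modulesDir)
        .filter(function (name) { return name.indexOf(prefix) === 0; })
        .map(function (name) { return require(path.join(modulesDir, name)); });
}

var plugins = discoverPlugins('myapp-');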
Brunch also uses a plugin mechanism. That one actually deals with file extensions, so it is more relevant: https://github.com/brunch/brunch/wiki/Plugins . See for example the source of the CoffeeScript plugin: https://github.com/brunch/coffee-script-brunch/blob/master/src/index.coffee .
I'm using require-ts (https://github.com/iammerrick/require-ts) to import a typescript file with require.js. I need to import several .d.ts files to satisfy the compiler. When I specify the declarations inside my require.config, they are resolved relative to the module importing them. This is because require-ts calls parentRequire on the declarations. However, I want to resolve the declarations globally, relative to my baseURL, as the definitions have nothing to do with the specific module. What is the best way to do this? I'm absolutely fine with modifying the require-ts if that is what's needed.
Per the module ID / resource URI resolution rules, the resource ID is NOT resolved relative to the current container/module unless the resource ID starts with the "." character ("./" or "../").
I am not aware of a formal AMD / RequireJS article on ID normalization / resolution, but the following two may help:
- module ID resolution / normalization
- general language on the define statement
In other words: "stop using . as the starting char of the resource, and the resource will be resolved against the AMD tree root."
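A quick sketch of the difference (module ids hypothetical):
// inside the module "app/widgets/grid":
define(function (require) {
    var rel = require('./helper');    // starts with "." -> resolved relative to this module: "app/widgets/helper"
    var abs = require('lib/helper');  // no leading "." -> resolved against the AMD root / baseUrl
});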