I was wondering whether Node.js/npm includes any kind of extension mechanism comparable to Python setuptools' "entry points".
So, in short:
is there any way I can do dynamic discovery of services provided by other packages using npm?
if not, what would be the best way to implement something similar? Specifying the extension name in the main module's configuration file seems to be the logical solution, but I wonder whether something "automatic" can be done.
I'm not aware of any builtin mechanism to do this.
One viable way of doing it yourself:
I made a small tool (Jumpstart) to quickly create project scaffolding from templates with placeholders, and I used a kind of plugin mechanism for that. It basically comes down to the Jumpstart script searching for modules named jumpstart-* "adjacent" to where the module itself is installed, so it works for both local and global installations. If installed locally, it searches the other local modules (on the same level); if installed globally, it searches the other global modules.
Note that here, "search" comes down to a simple fs.exists check to see if there's a Jumpstart template module with a particular name installed. However, nothing would stand in the way of actually getting a full list of all installed packages matching the jumpstart-* pattern and loading them all at once. I could also search up the entire directory tree for node_modules directories and do the same. There's no point in doing that for this particular program, however.
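A minimal sketch of that kind of adjacent-module discovery might look like this (not Jumpstart's actual code; the jumpstart- prefix and the file layout are just the convention described above, and the sketch assumes it sits at the package root):

// discover.js - sketch of discovering plugins installed next to this module
const fs = require('fs');
const path = require('path');

function discoverPlugins(prefix) {
  // __dirname/.. is the node_modules directory this package was installed
  // into, so sibling packages (local or global) sit right next to it.
  const modulesDir = path.resolve(__dirname, '..');
  return fs.readdirSync(modulesDir)
    .filter((name) => name.startsWith(prefix))
    .map((name) => require(path.join(modulesDir, name)));
}

// Load every matching plugin at once.
const plugins = discoverPlugins('jumpstart-');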
See https://npmjs.org/package/jumpstart for docs.
The only limitation of this technique is that all modules must be named in a consistent fashion: start with some string, end with some string, something like that. Any rogue packages polluting the namespace could be detected by doing further checks on a package's contents: What files does it contain? What kind of object does its main module export? Etc.
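A hedged sketch of such a sanity check (the expectations shown are purely illustrative):

// validate.js - sketch of weeding out rogue packages that merely match the name pattern
const fs = require('fs');
const path = require('path');

function looksLikeTemplate(moduleDir) {
  // Check the package contents: does it ship the files we expect?
  if (!fs.existsSync(path.join(moduleDir, 'package.json'))) return false;
  // Check what its main module exports before trusting it.
  const exported = require(moduleDir);
  return exported !== null && typeof exported === 'object';
}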
Brunch also uses a plugin mechanism. This one actually deals with file extensions, so is more relevant: https://github.com/brunch/brunch/wiki/Plugins . See for example source of the CoffeeScript plugin https://github.com/brunch/coffee-script-brunch/blob/master/src/index.coffee .
I have been working on a Clojure/ClojureScript project and something intrigues me.
In the shadow-cljs.edn file, there is a declaration of the dependencies. As you can see below, some of them have a "full name" declaration, indicated as username/repository-name. An example is venantius/accountant.
Others are declared only as repository-name, such as [bidi "2.1.5"], which is actually published by the user juxt (source).
I am afraid this could be problematic since multiple users could create repositories with the same name:
{:source-paths ["src" "dev" "test"]
:dependencies [
;; for deploy w lein deps below need to be in project.cljs
;; third-party dependencies
[venantius/accountant "0.2.5"]
[bidi "2.1.5"]
[cljs-hash "0.0.2"]
[clova "0.46.0"]
[com.andrewmcveigh/cljs-time "0.5.2"]
[org.clojure/core.match "1.0.0"]
[binaryage/dirac "RELEASE"]
[com.pupeno/free-form "0.6.0"]
[garden "1.3.10"]
[hickory "0.7.1"]
[metosin/malli "0.8.4"]
[medley "1.4.0"]
[binaryage/oops "0.7.0"]
[djblue/portal "0.16.1"]
[djblue/portal "0.18.0"]
[proto-repl "0.3.1"]
[reagent "1.1.0"]
[re-frame "1.2.0"]
[district0x/re-frame-window-fx "1.1.0"]
[cljsjs/react-beautiful-dnd "12.2.0-2"]
I am not sure how dependency resolution works at a low level in a Clojure/ClojureScript project.
Is it bad practice to have only the brief name of a dependency? Is an ambiguity problem feasible, or even possible?
Until not too long ago it was allowed to publish dependencies to https://clojars.org without a group name. In those cases the group would become identical to the artifact id. So bidi is effectively bidi/bidi, and [bidi "2.1.5"] resolves the same artifact as [bidi/bidi "2.1.5"].
Nowadays, new packages may only be published with a specific group name. However, old packages may continue using their older name.
The names used to publish also do not need to match the GitHub repo coordinates. These are separate systems. They often match, but are not required to.
To answer your question: you should avoid using the same dependency multiple times, and you should use the officially published name for each library. Some libraries are still updated under their old identifiers. Some moved to the newer, longer names, while the old ones are still available but no longer receive updates. Always consult the documentation of the specific libs to be sure which one you are supposed to use. They'll usually have some kind of info in their READMEs.
Conflicts may happen if you get the "same" lib via different identifiers. These may be very difficult to identify when you run into trouble. This is true for any dependency resolver you use (e.g. project.clj, deps.edn, shadow-cljs.edn). Best practice is to keep your dependencies as clean as possible.
I am building a custom binary of NodeJS from the latest code base for an embedded system. I have a couple of modules that I would like to ship as standard with the binary - or even run a custom script that is compiled into the binary and can be invoked through a command line option.
So two questions:
1) I vaguely remember that node allowed including custom modules at build time, but I went through the latest 5.9.0 configure script and I can't see anything related - or maybe I am missing it.
2) Did someone already do something similar? If yes, what were the best practices you came up with?
I am not looking for something like Electron or other binary bundlers but actually building into the node binary.
Thanks,
Andy
So I guess I figured it out much faster than I thought.
For anyone else, you can add any NPM module to it and just add the actual source files to the node.gyp configuration file.
Compile it and run the custom binary. It's all in there now.
> var cmu = require("cmu");
undefined
> cmu
{ version: [Function] }
> cmu.version()
'It worked!'
After studying this for quite a while, I have to say that flyandi's answer is not quite true. You cannot add just any npm module by adding it to node.gyp.
You can only add pure JavaScript modules this way. Embedding a C++ module (I deliberately don't use the word "native", because that one is quite ambiguous in Node.js terminology - just look at the sources) takes considerably more work, as described below.
To summarize this:
To embed a JS module in your custom nodejs, just add it to the library_files section of the node.gyp file. Also note that it should be placed within the lib folder, otherwise you'll have trouble requiring the module. That's because the name/path listed in node.gyp / library_files is used to encode the id of the module in the node_javascript.cc intermediate file, which is then used when searching for the built-in modules.
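For example, the relevant part of node.gyp could end up looking roughly like this (the surrounding entries differ between node versions, and lib/mymodule.js is a hypothetical name):

'variables': {
  'library_files': [
    # ... the stock lib/*.js entries already listed here ...
    'lib/mymodule.js',   # our embedded JS module, placed under lib/
  ],
},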
To embed a native module is much more difficult. The best way I have found so far is to build the module as a static library instead of a dynamic one, which for a cmake(-js) based module you can achieve by changing the SHARED parameter to STATIC like this:
add_library(${PROJECT_NAME} STATIC ${SRC})
instead of:
add_library(${PROJECT_NAME} SHARED ${SRC})
And also changing the suffix:
set_target_properties(
    ${PROJECT_NAME}
    PROPERTIES
    PREFIX ""
    SUFFIX ".lib")  # instead of .node
Then you can link it from node.gyp by adding this section:
'link_settings': {
    'libraries': [
        "path/to/my/library.lib",
        # ...add other static dependencies
    ],
},
(How to do this with a node-gyp based project should be quite easy to google.)
This allows you to build the module, but you won't be able to require it, because the require() function in node can only be used to load built-in JS modules, external JS modules or external dynamic node modules. But now we have a built-in C++ module. Well, a lot of node's integrated modules are C++, but they always have a JS wrapper in /lib, and those wrappers use process.binding() to load the C++ module. That is, process.binding() is sort of a require() function for integrated C++ modules.
That said, we also need to call process.binding() instead of require() to load our integrated module. To be able to do that, we have to make our module "built-in" first.
We can do that by replacing
NODE_MODULE(mymodule, InitAll)
in the module definition with
NODE_BUILTIN_MODULE_CONTEXT_AWARE(mymodule, InitAll)
which will register it as an internal module, and from then on we can process.binding() it.
Note that NODE_BUILTIN_MODULE_CONTEXT_AWARE is not defined in node.h like NODE_MODULE, but in node_internals.h, so you either have to include that one or copy the macro definition over to your .cpp file (the first option is of course better, because the nodejs API tends to change quite often...).
The last thing we need to do is to list our newly integrated module among the others so that node knows to initialize them (that is, to include them in the list of modules searched when process.binding() is called). In node_internals.h there is this macro:
#define NODE_BUILTIN_STANDARD_MODULES(V) \
V(async_wrap) \
V(buffer) \
V(cares_wrap) \
...
So just add your module to the list the same way as the others: V(mymodule).
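To make the built-in C++ module reachable through plain require(), you can mirror what node does for its own modules and add a thin JS wrapper under lib/. Here is a sketch (mymodule is the hypothetical name used above, and the wrapper itself must be listed in library_files as described earlier):

// lib/mymodule.js - sketch of a JS wrapper around the built-in C++ module
'use strict';

// process.binding() resolves the module registered via
// NODE_BUILTIN_MODULE_CONTEXT_AWARE(mymodule, InitAll).
const binding = process.binding('mymodule');

// Re-export whatever InitAll put on the exports object.
module.exports = binding;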
I might have forgotten some step, so ask in the comments if you think I have missed something.
If you wonder why anyone would even want to do this... You can come up with several reasons, but here's the one most important to me: the packagers used to pack your project into one executable (like pkg or nexe) work only with node-gyp based modules. If you, like me, need to use a cmake based module, the final executable won't work...
I have a puppet module that uses gini-archive. Recently I changed my module to depend on biemond-wildfly, which depends on nanliu-archive.
However, I can't install nanliu-archive, because both of these archive modules install into a directory called archive. This, I believe, violates the puppet module requirements, as they should both install into directories called <username>-archive.
However, even if I put them in different directories, I still have a problem. Both classes are called archive (actually one is a class and one is a define, but I don't think that's too important right now), so when my module says include archive, puppet isn't going to know which one I want.
Note I have a Java background, where every class is in a package hierarchy which prevents this kind of issue, but I can't see any equivalent for puppet.
I know I could have a whole load of different modules directories (/etc/puppet/modules, /etc/puppet/modules2 etc), but puppet still seems to look through these in order, meaning it will always load the archive class from the first module directory in the list.
Is there any way of solving this or have I reached the limit of what puppet can do? I'd rather not have to fork every single module and change the class names, that seems to defeat the point of the forge.
Thanks.
The name of the directory the module is in must be archive; the username is only used for the purpose of distributing and packaging modules, but is not used by puppet while autoloading. Basically, what you are seeing is correct.
There seem to be two ways of handling this:
Fork one of the two archive modules and rename the module so that it does not collide
Fork one of the modules using the archive modules and migrate it to use the same archive module as the other one. Since the two archive modules do almost the same thing, I prefer this method.
I just did this, so I'm going to expand a bit on option (1) in #ChrisPitman's answer by including more details, using a module I just forked & renamed as an example.
(Unfortunately) the simplest solution is to fork one of the modules and rename it. Below is an example using puppet/selinux and thias/selinux which have a namespace collision at selinux. The following steps were taken to re-namespace the thias/selinux module into the namespace selinux_thias:
Fork the module. In this example I have created USF-IMaRS/puppet-selinux from thias/puppet-selinux.
Install the module into modules/$NEW_NAME. Using git submodules this is: git submodule add https://github.com/USF-IMARS/puppet-selinux modules/selinux_thias
Rename the module class(es). Here is a commit demonstrating what this basically looks like.
Modify modules that use thias/selinux to use the new name selinux_thias instead of selinux.
I'd like to share how I implemented a solution to a problem I had, to get some feedback and maybe learn some new feature of buildbot.
Scenario:
Create a package of a given software, and upload the package to the buildmaster into a shared folder.
The package name contains some data that is known to the build system (i.e. the Makefiles), specifically the software version. Let's assume the package name is:
myapp-1.2.3-r2435.tar.gz
Question:
How do I send to the buildslave steps the information required to build up the very same package name, so that the buildslave can upload the package?
Specifically I need to know the version number (but I guess this could be any param)
Implemented (and working) solution:
The makefile, once the compilation process is completed, writes a file with the required param.
The slave uses the SetProperty() step to read the content of the file into a custom named property
Once I have the value of interest in the property (let's say APP_VERSION) I use it to build the package name with the same pattern used by the build system.
The described solution works, but I do not really like it because:
1) it's complicated, hence, I guess, fragile
2) it is not OS independent (I use "echo $VAR > file" to write the file, and "cat file" to read it and set the buildslave Property)
Is there in your opinion a better way to solve this issue?
Do you have any suggestion to make the solution OS independent? (It will not work for sure on Windows, while my package should be built on Windows too.)
In this instance I'm using C with autoconf, but the question applies elsewhere.
I have a glade xml file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script does that. The problem was more determining whether the app has been installed yet. I guess I'll just check the install location first, and if it's not there, fall back to "./src/foo.glade".
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them; the list should contain DATADIR+APPNAME as defined by autoconf, and CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
But in any case, don't forget to use those defines from autoconf for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no prescription for how this should be done in general, but Debian packagers usually install the application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. You can see the Debian policy for more information.
I can, however, say a few words about how I do it. First, I don't expect to find the file in a single directory; I create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file lookup should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on your output from configure and what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
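As a rough sketch of such a wrapper in C (the names are made up; DESTDIRS is assumed to be the colon-separated list passed in from Makefile.am as in the example above):

/* find_and_open.c - sketch: try each candidate directory in order and
 * return the first successful fopen(), or NULL if the file is nowhere. */
#include <stdio.h>
#include <string.h>

#ifndef DESTDIRS
#define DESTDIRS ".:/usr/local/share/program-name:/usr/share/program-name"
#endif

FILE *find_and_open(const char *filename, const char *mode)
{
    char dirs[] = DESTDIRS;      /* mutable copy for strtok() */
    char path[4096];

    for (char *dir = strtok(dirs, ":"); dir != NULL; dir = strtok(NULL, ":")) {
        snprintf(path, sizeof(path), "%s/%s", dir, filename);
        FILE *fp = fopen(path, mode);
        if (fp != NULL)
            return fp;           /* first hit wins */
    }
    return NULL;                 /* not found in any candidate directory */
}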
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build it into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml and stop looking, or don't find it and exit with an error. Good candidates for the default list are /etc, /usr/share/app-name, /usr/local/etc.
If the file is designed to be customizable, then before I look through the default directories I work through a list of user files and paths. If I don't find one of the user versions, I look in the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml or ~/.app-name/.glade.xml.