How to write an NPM module that uses multiple files?

I have a simple local command-line module that I want to use in 2 different ways (basically different defaults), but it uses the same core logic, so I want to extract that logic into a third entity and use that from the two entry points.
I have everything working with two bin command scripts, but each file has its own copy of the logic to run, and I am not sure how to pull this duplicated code out into a third file within the same module. I figure I could do it by creating an entire separate module and loading it with require(), but I would rather just keep it together since it's tightly coupled.
The structure is like this:
bin\
  cmdone.js
  cmdtwo.js
core.js
package.json
I would like to move the logic, which currently exists in both cmdone.js and cmdtwo.js, into core.js and reference it from the two files in bin. Is this possible?

If I understand your question correctly, what you need is Node's require() function.

Well, after some more poking around, I discovered that this works:
const test = require('../core.js');
I suppose I misunderstood the distinction between Node modules and NPM packages. I was basically equating the two, but it seems you can create and use modules entirely within a package; they don't have to be one-to-one.
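
To make that concrete, here is a minimal sketch of the split (the function name, options, and logic are placeholders, not the OP's actual code):

// core.js (at the package root): the shared logic, exported once
module.exports = function run(options) {
  const opts = Object.assign({ verbose: false }, options);
  // ...the logic that used to be duplicated in both bin scripts...
  console.log('running with', opts);
};

#!/usr/bin/env node
// bin/cmdone.js: a thin entry point that only supplies its own defaults
const run = require('../core.js');
run({ verbose: true });

bin/cmdtwo.js would look the same, just with different defaults passed to run().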

Related

How to avoid double creating directories in /proc?

I'm writing a Linux kernel module, and I'd like to create a subdirectory, /proc/foo/, and then expose several artificial files inside it that will be generated on the fly by my module. I know I can use proc_mkdir to create the foo directory, but if it already exists dmesg will display a warning, and I'd prefer to keep the log clean.
Now you might think that on module teardown it should remove the /proc/foo/ tree, so a redundant mkdir should never happen. But I'm working on a series of related kernel modules, and I figured I'd have each of them separately expose files in /proc/foo/. Maybe this is atypical? I don't see any functions in proc_fs.h for querying existing files, so maybe I'm going about this wrong?
Another option would be to have a module that just creates the directory, and have it export a global containing the proc_dir_entry, and then all of my modules can extern that variable and use it. But then I have to worry about that module getting loaded before all of the others. But maybe that's the way this is usually done? I'm interested in knowing what best practices are.
It is odd. If you really want everything grouped, just create a module that provides /proc/foo and make everything else depend on it.

Notating Multiple Modules in the Same Package.json File

I am trying to package up some modules that I have been working on. I have five modules, split in to five files. Four of them are the actual outward-facing modules that I want the user to be able to install. The other one is a support module that they all need to function correctly. They are all stored in the same directory. I want to be able to specify each as a separate module in the same directory. But as far as I can tell, one can only define a single module in package.json.
Is there a way to specify multiple modules? If not, that means this must be a bad practice. How should I structure my module's exports to move everything into one main module?
Currently there's no supported way of having a separate package.json file for each module you publish from the same directory. And really, this makes sense: each package you deploy may have issues, feature requests, and bugs that need to be handled separately, without forcing updates of the others. Separating them out lets you focus on the maintenance of each independently, and also lets the consumers of these modules include them separately. A lot of larger-scale projects that started by creating something they thought people would like end up finding that the thing everyone actually uses is a random sub-project that was created separately.
So: separate directories and separate package.json files, then declare the dependencies in the package.json of each (a minimal sketch follows the links below). If you haven't already seen them, there are a couple of good write-ups to help with developing node packages here:
https://docs.npmjs.com/about-packages-and-modules
https://docs.npmjs.com/creating-a-package-json-file
https://docs.npmjs.com/using-npm-packages-in-your-projects
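
For example (all names here are hypothetical), the layout could be one directory per package, with the shared support code published as its own small package that each public package declares as a dependency:

packages/
  support-lib/
    package.json
    index.js
  module-one/
    package.json
    index.js

module-one/package.json:

{
  "name": "module-one",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "support-lib": "^1.0.0"
  }
}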

Creating node module half-native half-js

I have made a module for Node.js that is half native (a compiled C++ library) and half JavaScript (about 10 *.js files). How should I distribute it? As a single module, or separately?
It depends on the use case, I suppose, but one module is probably fine, unless you'd like to maintain the halves in separate repositories, or you've got other modules that might prefer depending on them separately. In that case splitting is just a matter of adding one to the other's package.json dependency list.
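
If you do keep it as one module, the common shape is a thin JavaScript entry point that loads the compiled addon and re-exports it. A minimal sketch, assuming a node-gyp build and a hypothetical addon and function name:

// index.js: the JS half of the package, wrapping the native half
const native = require('./build/Release/myaddon.node'); // built by node-gyp

// Expose the native function behind a friendlier JS API
module.exports.sum = function (numbers) {
  return native.sum(numbers); // `sum` is an assumed native export
};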

Using UglifyJs on the whole Node project?

I need to obfuscate my source code as well as possible, so I decided to use uglifyjs2. My project structure has nested directories; how can I run the whole project through uglifyjs2 instead of giving it all the input files one by one?
I wouldn't mind if it minified the whole project into a single file or something
I've done something very similar to this in a project I worked on. You have two options:
Leave the files in their directory structure.
This is by far the easier option, but provides a much lower level of obfuscation since someone interested enough in your code basically has a copy of the logical organization of files.
An attacker can simply pretty-print all the files and rename the obfuscated variable names in each file until they have an understanding of what is going on.
To do this, use fs.readdir and fs.stat to recursively go through folders, read in every .js file and output the mangled code.
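
A minimal sketch of that recursive walk, using the current uglify-js (v3) minify() API; the src/dist directory names are assumptions:

// minify-tree.js: walk a source tree and write a mangled copy of every .js file
const fs = require('fs');
const path = require('path');
const UglifyJS = require('uglify-js');

function minifyTree(srcDir, outDir) {
  fs.mkdirSync(outDir, { recursive: true }); // Node 10.12+
  for (const name of fs.readdirSync(srcDir)) {
    const srcPath = path.join(srcDir, name);
    const outPath = path.join(outDir, name);
    if (fs.statSync(srcPath).isDirectory()) {
      minifyTree(srcPath, outPath); // recurse into subfolders
    } else if (name.endsWith('.js')) {
      const result = UglifyJS.minify(fs.readFileSync(srcPath, 'utf8'));
      if (result.error) throw result.error;
      fs.writeFileSync(outPath, result.code);
    }
  }
}

minifyTree('src', 'dist');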
Compile everything into a single JS file.
This is much more difficult for you to implement, but does make life harder on an attacker since they no longer have the benefit of your project's organization.
Your main problem is reconciling your require calls with files that no longer exist (since everything is now in the same file).
I did this by using Uglify to perform static analysis of my source code by analyzing the AST for calls to require. I then loaded the source code of the required file and repeated.
Once all code was loaded, I replaced the require calls with calls to a custom function, wrapped each file's source code in a function that emulates how node's module system works, and then mangled everything and compiled it into a single file.
My custom require function does most of what node's require does except that rather than searching the disk for a module, it searches the wrapper functions.
Unfortunately, I can't really share any code for #2 since it was part of a proprietary project, but the gist is:
Parse the source text into an AST using UglifyJS.parse.
Use the TreeWalker to visit every node of the AST and check if
node instanceof UglifyJS.AST_Call && node.start.value == 'require'
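
As an illustration of that gist, here is a minimal sketch against the UglifyJS AST API the answer names (the entry file name is an assumption):

const fs = require('fs');
const UglifyJS = require('uglify-js');

const ast = UglifyJS.parse(fs.readFileSync('entry.js', 'utf8'));

const required = [];
const walker = new UglifyJS.TreeWalker(function (node) {
  // The check from the answer: a call whose first token is `require`
  if (node instanceof UglifyJS.AST_Call && node.start.value === 'require') {
    const arg = node.args[0];
    if (arg instanceof UglifyJS.AST_String) {
      required.push(arg.value); // module path to load and scan next
    }
  }
});
ast.walk(walker);
console.log(required);

And the run-time replacement for require can be as simple as a table of wrapper functions keyed by module path, in the spirit of what the answer describes (again, only a sketch):

var modules = {
  './core.js': function (module, exports, myRequire) {
    // the original source of core.js ends up compiled in here
  }
};
var cache = {};
function myRequire(id) {
  if (!cache[id]) {
    cache[id] = { exports: {} };
    modules[id](cache[id], cache[id].exports, myRequire);
  }
  return cache[id].exports;
}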
Having just completed a huge pure Node.js project spanning 80+ files, I had the same problem as the OP. I needed at least minimal protection for my hard work, but it seems this very basic need hasn't been covered by the NPM open-source community. To add insult to injury, the JXCore package encryption system was cracked last week in a few hours, so it's back to obfuscation...
So I created a complete solution that handles file merging and uglifying. You also have the option of leaving specified files/folders out of the merge; those files are then copied to the output location of the merged file, and references to them are rewritten automatically.
NPMjs link of node-uglifier
GitHub repo of node-uglifier
PS: I would be glad if people contributed to make it even better. This is a war between thieves and hard-working coders like yourself. Let's join forces and increase the pain of reverse engineering!
This isn't supported natively by uglifyjs2.
Consider using webpack to package up your entire app into a single minified .js file, excluding node_modules:
http://jlongster.com/Backend-Apps-with-Webpack--Part-I
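
A minimal sketch of such a config for a modern webpack (4+); webpack-node-externals is a separate package that keeps node_modules out of the bundle, and the entry/output paths here are assumptions:

// webpack.config.js
const path = require('path');
const nodeExternals = require('webpack-node-externals');

module.exports = {
  target: 'node',               // don't bundle Node built-ins like fs or path
  mode: 'production',           // enables minification
  entry: './src/index.js',
  externals: [nodeExternals()], // leave node_modules as ordinary runtime requires
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'app.min.js',
  },
};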
I had the same need, for which I created node-optimize and grunt-node-optimize.
https://www.npmjs.com/package/grunt-node-optimize

nodejs - what to use instead of require.paths?

Recent node docs say that modifying require.paths is bad practice. What should I do instead?
I believe the concern is that it can be repeatedly modified at run time, rather than just set once. That can obviously be confusing and cause some quite bizarre bugs. Also, if individual packages modify the path, the results are applied globally, which is really bad and goes against the modular nature of node.
If you have several library paths of your own, the best solution is to set the NODE_PATH environment variable before launching node. Node then picks this up when it's launched and applies it automatically.
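
For example, with a shared library at /home/me/lib/mylib.js (paths hypothetical), launch node as:

NODE_PATH=/home/me/lib node app.js

and app.js can then load it without any relative path:

const mylib = require('mylib'); // resolved via NODE_PATH, not ./ or ../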
I keep the related modules in the same dir or a sub-dir and load them using:
var x = require('./mod/x');
In case it's an external module, I install it using npm, which puts the module in node_modules, where require() finds it automatically.
I've never changed require.paths.
Have a look at https://github.com/patrick-steele-idem/app-module-path-node; it lets you add a directory for top-level require() calls without influencing the paths of sub-modules.
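Usage is a one-liner at the top of your entry script (the lib directory here is hypothetical):

// Register an extra search root before any other require() calls
require('app-module-path').addPath(__dirname + '/lib');
const mymodule = require('mymodule'); // now also resolves from ./lib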
Unless I'm mistaken in my understanding, the primary limitation of the current system is that, for namespacing, you're stuck without the use of folders for non-hierarchical dependencies.
What that means in practice...
Consider that you have x/y/z and a/b, as well as a/b/c. If both a/b and a/b/c depend on x/y/z, you end up having to either specify that relatively (require('../../x/y/z') and require('../../../x/y/z') respectively) or make every single package a node_module. Failing that, you can probably do horrific things with symlinks or similar.
As far as I can tell, the only alternative is, rather than using folders to namespace and organise, to use filenames such as:
a.b.js
a.b.c.js
x.y.z.js
