Create scaffolding generators like create-react-app/open-wc using Node.js

I am trying to create a project structure for my team, like the ones implemented by open-wc or create-react-app:
you just say npm init @open-wc, it asks a couple of questions, and it creates a folder with the specified configuration.
I didn't find any good articles on Google, apart from exploring the GitHub projects themselves.

Maintainer of open-wc here :)
So to get an npm init script, all you need to do is define a bin in your package.json.
This is what we use for npm init @open-wc:
"name": "@open-wc/create",
"bin": {
  "create-open-wc": "./dist/create.js"
},
For the name you then have 2 choices:
create-foo will be available via npm init foo
@foo/create will be available via npm init @foo
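To make that concrete, here is a minimal sketch of what such a bin script could look like; the questions and generated files below are made up for illustration, not what the open-wc generator actually does. Note the shebang on the first line, which lets npm run the file with Node:

#!/usr/bin/env node
// create.js - hypothetical minimal scaffolding script (not the real open-wc one)
const fs = require('fs');
const path = require('path');
const readline = require('readline');

const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
const ask = (q) => new Promise((resolve) => rl.question(q, resolve));

(async () => {
  // ask a couple of questions, like create-react-app does
  const name = await ask('Project name: ');
  const dir = path.resolve(process.cwd(), name);
  fs.mkdirSync(dir, { recursive: true });

  // write a starter package.json and entry point with the answers baked in
  const pkg = { name, version: '0.1.0', scripts: { start: 'node index.js' } };
  fs.writeFileSync(path.join(dir, 'package.json'), JSON.stringify(pkg, null, 2));
  fs.writeFileSync(path.join(dir, 'index.js'), `console.log('Hello from ${name}');\n`);

  console.log(`Scaffolded ${dir}`);
  rl.close();
})();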
The generators themselves
That's a rather sad story... we looked around but didn't find anything that really fit our use case. There is http://yeoman.io/, which we used initially, but it's huge, and it meant we had a boot-up time of ~30-40 seconds before the menu appeared. We felt we needed to do something, so we now roll our own solution.
It covers what we need with a fraction of the size (especially by being very careful with dependencies), which reduced our boot-up time to about ~5-10 seconds.
We thought about promoting it as a separate standalone project, but truth be told we don't have the manpower for it. It's just 4 files you can find here: https://github.com/open-wc/open-wc/tree/master/packages/create/src - beware, there is no documentation and quite a few rough edges.
Still, if you don't find a better solution, feel free to join us; with some help, we could make it a separate product.

Related

node_modules polluting my codebase

Hello, I'm new to Node and in particular the dependency management system. When installing a module, I find that my codebase is covered by lots of dependencies even though the actual code I've written is not that lengthy. I've also noticed that sometimes when I do an npm install, instead of packaging all of a module's dependencies under one folder representing that module, the dependencies end up sitting in parallel, polluting the main folder. For example, I created a module that has maybe 3 sub-modules that are all used by the main module and fit well together:
index.js
node_modules
  my_authentication_module
  my_authorization_module
  my_persistance_module
Then when I installed an AWS dependency, the number of modules jumped considerably, so my codebase looks like:
index.js
node_modules
  my_authentication_module
  my_authorization_module
  my_persistance_module
  aws_module_1
  aws_module_2
  ...
  aws_module_20
Problem
This is cluttering my code and making it look like there's a lot more going on than there is. Is there a more efficient way of managing a Node project?
Secondary Question
How come running "npm install some-module --save" does not confine all of the module's dependencies to a single folder? Is there a way of doing this, so that if some package needs 50 packages, I don't end up with 50 packages sitting in parallel with the package that needs them?
For example. Instead of:
node_modules
  my_authentication_module
  my_authorization_module
  my_persistance_module
  aws_module_1
  aws_module_2
  ...
  aws_module_20
It would be nice to get
node_modules
  my_authentication_module
  my_authorization_module
  my_persistance_module
  aws
    node_modules
      aws_module_1
      aws_module_2
      ...
      aws_module_20
So at least when navigating to the top level you can easily see there are really only 3 modules of interest, with a bunch of AWS dependencies crammed neatly into one folder. Is anything like this possible?
You seem to be misinterpreting the purpose of node_modules. It is exclusively the province of npm (or yarn etc.). It will never include your own code (except perhaps other separate packages of yours, brought in as dependencies). It is (usually) not version-controlled; that is, it is .gitignore'd. At any point, you should be able to erase it entirely and repopulate it with a simple npm install. Of course, as with anything, there are more nuances and divergent opinions here that are discussed in detail across the web.
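In concrete terms, treating node_modules as disposable looks like this (standard git and npm commands):

# keep npm's territory out of version control
echo "node_modules/" >> .gitignore

# node_modules can be wiped and rebuilt from package.json at any time
rm -rf node_modules
npm install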
There are many ways to manage and structure your own code and artifacts. In many cases it will go under a src or possibly lib directory at the top of the project, parallel to node_modules. Within src, some people prefer to group code by function (e.g., controllers, routers, services), while others prefer to group by area of concern; that's a matter for the project owner to decide.
In any case, since node_modules is fundamentally not your direct concern, it is of no real consequence whether npm organizes the dependencies hierarchically, as was the case with npm@2, or flat, as with npm@3. Yes, it can be a bit disconcerting to see a node_modules with 100 or 500 entries, but that's really a matter for npm to worry about, and there are good reasons for the change made in npm@3.
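And if the visual clutter bothers you, npm can still show the logical dependency tree regardless of how the folders are laid out on disk:

npm ls --depth=0   # only the packages you installed directly
npm ls             # the full logical tree, transitive deps nested under their parents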

how to auto install npm packages on webpack

I want to automate some tasks that I do all the time, so I have two questions:
Can I have all my npm packages point to a cache location (in package.json), so I don't have to re-download what I already have while I develop (also to save space), but change it back to the real package names when I deploy?
I want to do something with webpack where, if you type 'require' in the js files, it'll automatically "install" the packages (using the method from #1).
Can these be done? I want to automate lots of stuff that I'm doing and feel this is a good starting point. Thanks in advance.
So, basically:
As Mike 'Pomax' Kamermans and Louy said, you don't need to care about re-downloads: npm already caches packages behind the scenes.
I stumbled upon the auto-install-webpack-plugin today while looking for something similar, and it seems to be what you're describing. I haven't tested it yet, though.
I can edit this answer after testing it, but let me know if you try it first.
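For reference, wiring such a plugin into a webpack config generally looks like the sketch below; the require name and constructor here are assumptions based on the plugin name above, so check its README for the actual API:

// webpack.config.js - hypothetical wiring, verify against the plugin's docs
const AutoInstallPlugin = require('auto-install-webpack-plugin'); // assumed export

module.exports = {
  entry: './src/index.js',
  output: { filename: 'bundle.js' },
  plugins: [
    // the idea: detect require() calls to missing packages and npm-install them
    new AutoInstallPlugin()
  ]
};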

Is there a generic way to consume my dependency's grunt build process?

Let's say I have a project where I want to use Lo-Dash and jQuery, but I don't need all of the features.
Sure, both these projects have build tools so I can compile exactly the versions I need to save valuable bandwidth and parsing time, but I think it's quite uncomfortable and ugly to install both of them locally, generate my versions, and then check them into my repository.
I'd much rather integrate their grunt process into my own and create custom builds on the go, which would be much more maintainable.
The Lo-Dash team offers this functionality with a dedicated CLI and even wraps it in a grunt task. That's very nice indeed, but I want a generic solution for this problem, as it shouldn't be necessary for every package author to replicate this.
I tried to achieve this somehow with grunt-shell hackery, but as far as I know it's not possible to install devDependencies more than one level deep, which makes it even uglier to execute the required grunt tasks.
So what's your take on this, or should I just move this over to the 0.5.0 discussion of grunt?
What you ask assumes that the package has:
A dependency on Grunt to build a distribution; most popular libraries have this, but some of the less common ones may still use shell scripts or the npm run command for general minification/compression.
Some way of generating a custom build in the first place, with a dedicated tool, as Modernizr and Lo-Dash have.
You could perhaps substitute number 2 with a generic tool that parses both your source code and the library code and uses code coverage to eliminate unnecessary functions from the library. This is already being developed (see goldmine); however, I can't make any claims about how good it is because I haven't used it.
Also, I'm not sure how that would work in an AMD context where there are a lot of interconnected dependencies; ideally you'd be able to run the r.js optimiser and get an almond build for production, and then filter that for unnecessary functions (most likely with Istanbul; you'd then have to make sure the filtered script still passed all your unit/integration tests). Not sure how that would end up looking, but it'd be pretty cool if that could happen. :-)
However, there is a task especially for running Grunt tasks from 'sub-gruntfiles' that you might like to have a look at: grunt-subgrunt.
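For what it's worth, delegating to a dependency's own Gruntfile with grunt-subgrunt looks roughly like the sketch below; the option keys and the lodash 'build' task are from memory and for illustration, so verify them against the grunt-subgrunt README:

// Gruntfile.js - sketch of running a dependency's Gruntfile via grunt-subgrunt
module.exports = function (grunt) {
  grunt.initConfig({
    subgrunt: {
      lodash: {
        // run the 'build' task defined inside the dependency's own project
        projects: { 'node_modules/lodash': 'build' }
      }
    }
  });

  grunt.loadNpmTasks('grunt-subgrunt');
  grunt.registerTask('default', ['subgrunt:lodash']);
};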

How do I choose a node module?

There are dozens of modules available out there, many fulfilling the same task. For instance, the list of router modules alone contains 26 modules.
Given a list of modules, how can I pick the best for my needs? I am looking for one that is maintained, tested, and with some inertia, but I'm not sure how to figure out which of these modules fit that criteria.
This answer is based on a talk given a few weeks ago in San Francisco by Isaac Schlueter (npm author, took over node.js responsibilities from Ryan Dahl, works at Joyent - https://twitter.com/izs).
Isaac's main project now is improving npm to help people figure out the quality of packages.
Before the npmjs.org website gets smarter, here are factors to consider (some already listed by @3boll):
Factors
Number of downloads
How recently updated
History of updates (has it been updated often over a long period of time?)
Number of contributors
Have well-known/trusted developers and maintainers starred it? [a]
Do other important packages depend on it? [b]
Is the package well-documented, and does it have its own website?
Does the module have test coverage?
Github factors:
updated: As of npm 1.2.20 and forward, modules without a repository field will show a "missing repository field" warning. (A nice touch to put a little pressure on people to package up their modules correctly.)
Number of forks
Number of commits
Are issues being closed on github, or have the same issues been open for a long time?
[a] example of starred
https://npmjs.org/~tjholowaychuk
[b] to quickly see from terminal:
npm view <name_of_module> dependencies
example:
npm view connect dependencies
https://npmjs.org/browse/depended
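Several of the other factors can be checked from the terminal the same way; these npm view fields exist alongside dependencies:

npm view <name_of_module> versions      # full publish history, shows how often it's updated
npm view <name_of_module> maintainers   # who maintains the package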
CAVEATS:
Popular doesn't mean good. There are a lot of modules that are not popular but are really good.
Inaccurate "last updated": npm may show that a module was last updated 2 years ago even though its GitHub repo was updated last week. This happens if the maintainer doesn't publish a new version as the code changes on GitHub.
This module (a middleware layer for Node.js) meets your requirements:
connect — Robust high performance middleware framework
About 500 forks
1000+ commits
Last update: 7 days ago
569 npm packages depend on this module: https://npmjs.org/browse/depended
P.S.
I have nothing to do with the development of the module; this is just my recommendation.

Memcached on NodeJS - node-memcached or node-memcache, which one is more stable?

I need to implement a memory cache with Node, and it looks like there are currently two packages available for doing this:
node-memcached (https://github.com/3rd-Eden/node-memcached)
node-memcache (https://github.com/vanillahsu/node-memcache)
Looking at both GitHub pages, it looks like both projects are under active development with similar features.
Can anyone recommend one over the other? Does anyone know which one is more stable?
At the moment of writing this, the project 3rd-Eden/node-memcached doesn't seem to be stable, according to its GitHub issue list (e.g. see issue #46). Moreover, I found its code quite hard to read (and thus hard to update), so I wouldn't suggest using it in your projects.
The second project, elbart/node-memcache, seems to work fine, and I feel good about the way its source code is written. So if I were to choose between only these two options, I would prefer elbart/node-memcache.
But as of now, both projects suffer from a problem storing BLOBs. There's an open issue for the 3rd-Eden/node-memcached project, and elbart/node-memcache simply doesn't support the option. (It would be fair to add that there's a fork of the project that is said to add the option of storing BLOBs, but I haven't tried it.)
So if you need to store BLOBs (e.g. images) in memcached, I suggest using the overclocked/mc module. I'm using it now in my project and have no problems with it. It has nice documentation, it's highly customizable, yet still easy to use. And at the moment it seems to be the only module that handles storing and retrieving BLOBs fine.
Since this is an old question/answer (2 years ago), and I got here by googling and then researching, I feel I should tell readers that I definitely think 3rd-Eden's memcached package is the one to go with. It seems to work fine, and based on the usage by others and its recent updates, it is the clear winner: almost 20K downloads for the month, 1300 just today, and the last update was made 21 hours ago. No other memcache package even comes close. https://npmjs.org/package/memcached
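For anyone landing here, basic usage of that memcached package looks roughly like this (API as documented in its README; the server address and keys are made up):

// basic get/set with the 3rd-Eden memcached client
const Memcached = require('memcached');
const memcached = new Memcached('localhost:11211'); // assumes a local memcached server

// set(key, value, lifetime-in-seconds, callback)
memcached.set('greeting', 'hello', 60, (err) => {
  if (err) throw err;
  memcached.get('greeting', (err, data) => {
    if (err) throw err;
    console.log(data); // 'hello'
    memcached.end();   // close connections when done
  });
});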
The best way I know of to see which modules are the most robust is to look at how many projects depend on them. You can find this on npmjs.org's search page. For example:
memcache has 3 dependent projects
memcached has 31 dependent projects
... and in the latter, I see connect-memcached, which would seem to lend some credibility there. Thus, I'd go with the latter barring any other input or recommendations.
