How to create your own npm starter-kit? - node.js

State of the art:
Popular Node.js frameworks come with their own starter-kits that act as project templates; React, for example, comes with the starter-kit create-react-app.
When users want to create a new React application, they can use the shorthand npx create-react-app my-app to automagically create the whole project scaffold with all baseline project files, directories, and package dependencies.
When using the starter-kit, npx resolves template variables inside the starter-kit files, i.e. if package.json contains an entry {{my-application-name}}, npx will ask for user input to replace the template variable with e.g. "my-first-app".
React is just one example of such a starter-kit.
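For illustration, the kind of templated package.json described here might look like the following minimal, hypothetical sketch, with the placeholder that the starter-kit is expected to resolve:

```json
{
  "name": "{{my-application-name}}",
  "version": "0.1.0"
}
```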
Problem statement:
I would like to build my own starter-kit for my own node.js framework.
At first glance, it seems that one just has to follow the convention of naming the package starting with "create-" and use the template {{vars}} in the right places.
I gave it a try and started copy&paste&replace; however, npm install did not work, since unresolved template variables result in build errors.
Does somebody know a tutorial on how to create your own npx starter-kit?
Does somebody know how one can build and publish a new starter-kit?

Yes, you can build your own npx starter by using the CLI helper packages on the npm registry.
Steps for how to build your own npx starter:
1. You can use args or commander to get the CLI inputs (Node's argv within the process object) and parse them into an object so you can keep track of what the user wants.
2. Based on the previously parsed object from the CLI, you can gather more information by using inquirer, which lets you ask yes/no questions or have the user choose one of many options.
3. After the user answers all the questions, you can write the logic for creating the required starter based on the user's inputs.
4. You can use execa to run external commands such as git init, git clone, or any other command that comes to mind.
5. You can use listr to prettify your task lists with a beautiful spinner.
6. You can use chalk to print lines in the CLI with colors and make them easier to read: make the message green on success and red on failure.
A minimal sketch that puts these pieces together is shown below. For more explanation of how it all works together, you can watch this video.
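For illustration only, here is one way such a create-* CLI could be wired together, assuming CommonJS-compatible versions of the packages named above (commander, inquirer v8, execa v5, chalk v4). The {{my-application-name}} placeholder comes from the question; the package layout, template directory, and option names are a hypothetical sketch rather than a definitive implementation:

```javascript
#!/usr/bin/env node
// Hypothetical bin script for a "create-*" starter package.
// Assumes CommonJS builds: commander, inquirer v8, execa v5, chalk v4.
const fs = require('fs');
const path = require('path');
const { program } = require('commander');
const inquirer = require('inquirer');
const execa = require('execa');
const chalk = require('chalk');

program.option('--git', 'initialize a git repository').parse(process.argv);

async function run() {
  // 1. Read CLI inputs; fall back to an interactive prompt for the name.
  const answers = await inquirer.prompt([
    {
      type: 'input',
      name: 'projectName',
      message: 'Project name?',
      when: () => !program.args[0],
    },
  ]);
  const projectName = program.args[0] || answers.projectName;
  const targetDir = path.resolve(process.cwd(), projectName);

  // 2. Copy the template files and resolve the {{my-application-name}}
  //    placeholder (a flat template directory is assumed for brevity).
  const templateDir = path.join(__dirname, 'template');
  fs.mkdirSync(targetDir, { recursive: true });
  for (const file of fs.readdirSync(templateDir)) {
    const content = fs
      .readFileSync(path.join(templateDir, file), 'utf8')
      .replace(/\{\{my-application-name\}\}/g, projectName);
    fs.writeFileSync(path.join(targetDir, file), content);
  }

  // 3. Run external commands with execa.
  if (program.opts().git) {
    await execa('git', ['init'], { cwd: targetDir });
  }
  await execa('npm', ['install'], { cwd: targetDir });

  // 4. Report the result in color with chalk.
  console.log(chalk.green(`Created ${projectName} in ${targetDir}`));
}

run().catch((err) => {
  console.error(chalk.red(err.message));
  process.exit(1);
});
```

Publishing this with a "bin" entry in package.json under a name such as create-my-framework (hypothetical) lets users scaffold a project with npx create-my-framework my-app; npm init my-framework resolves to the same create-my-framework package.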

Related

Can you run an outside CLI command inside of a CLI you're developing?

I'm developing a CLI that needs to generate Angular components as well as
create other files in a separate module. So, the project that the CLI will assist in would ideally call ng generate to handle the Angular side of things, and then use fs-extra to go into the other module and add files. It's for a CMS SPA editor where the Angular code and CMS code live in the same project.
So, am I able to run a different CLI's commands inside of my node CLI that I'm developing, or do I need to manually create all the files that ng generate is going to create along with the other files I'm creating at the same time?
Angular Schematics are the smarts behind the Angular CLI. You can use schematics to build your own CLI and leverage their features; that way you will still get all of the ng generate functionality.
See these articles for more information on the CLI schematics:
https://brianflove.com/2018/12/11/angular-schematics-tutorial/
https://github.com/angular/angular-cli
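That said, the direct answer to "can I run a different CLI's commands inside my Node CLI" is yes: you can spawn the Angular CLI like any other external command. A minimal sketch, assuming ng is installed, using Node's built-in child_process plus the fs-extra package mentioned in the question; the component name, directories, and config file are hypothetical:

```javascript
// Run "ng generate" from inside our own CLI, then add extra files elsewhere.
const { spawnSync } = require('child_process');
const path = require('path');
const fs = require('fs-extra');

// 1. Shell out to the Angular CLI inside the Angular project directory.
const result = spawnSync('ng', ['generate', 'component', 'my-widget'], {
  cwd: path.resolve(__dirname, 'angular-app'), // hypothetical project path
  stdio: 'inherit', // stream ng's own prompts and output to the user
  shell: true,      // helps resolve the ng binary on Windows
});
if (result.status !== 0) {
  process.exit(result.status || 1);
}

// 2. Create a related file in the separate CMS module with fs-extra.
fs.outputFileSync(
  path.resolve(__dirname, 'cms-module', 'my-widget.config.json'),
  JSON.stringify({ component: 'my-widget' }, null, 2)
);
```

The Schematics route above is more powerful (it can update existing files and respect the workspace configuration), but simply shelling out is often enough.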

Updating angular code dynamically when conditions are met in nodejs code

I was looking for something that can update code automatically when certain conditions are met. It is similar to how new dependencies are added to the package.json file when we do an npm install.
How can I make this possible?
Explaining further: when I run a command (in Node), I have some code in some other folder (Angular), and the command should update that code, for example by inserting new imports.
If you are using Angular CLI 1.7+ and want to update all dependency packages, you can use ng update.
Running the 'update' command will check for changes in the auto-generated files that come with Angular projects out of the box, and you will be asked whether you want to overwrite the existing files with the new ones. You will also be prompted to choose from four options: y (yes), n (no), d (difference) and h (help). Choose with discretion.
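If the goal is only to edit Angular source files from a Node script when some condition is met (rather than updating dependencies), a minimal hand-rolled sketch could look like the following; the target file path and the import being inserted are purely hypothetical, and for anything beyond simple text edits, Angular schematics (see the previous answer) are the more robust tool:

```javascript
// Insert an import into an Angular file from a Node script, if it is missing.
const fs = require('fs');
const path = require('path');

const target = path.resolve(__dirname, 'angular-app/src/app/app.module.ts');
const importLine = "import { SharedModule } from './shared/shared.module';";

const source = fs.readFileSync(target, 'utf8');
if (!source.includes(importLine)) {
  // Prepend the new import above the existing code and write the file back.
  fs.writeFileSync(target, importLine + '\n' + source);
  console.log('Added SharedModule import to app.module.ts');
}
```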

Yeoman Sharepoint Client-side Solution Generator stops taking input

I'm trying to install a SharePoint client-side solution using Yeoman but I'm running into difficulties.
I can't get past the 4th question; it doesn't accept any input, so I can't answer it.
Versions
node v8.1.0
yo v2.0.0
gulp v3.9.1
microsoft/generator-sharepoint v1.3.2
Not really sure what the issue is here, as it looks to be specific to your machine. I am able to get past the questions and create a solution.
However, as a workaround for your issue, you can try the below steps:
Paste the below command to see the list of command-line options available for the SharePoint generator:
yo @microsoft/sharepoint --help
It will show the full list of supported options. Based on those, you can build the command string and generate the project as below.
Assume that you want to create an SPFx web part with the React framework; then your command would be:
yo @microsoft/sharepoint --solutionName "hello-world" --framework "react" --componentType "webpart" --componentName "HelloWorld" --componentDescription "HelloWorld web part" --environment "spo" --skipFeatureDeployment false
Similarly, you can create/modify the above command as per your requirements.
Reference - Scaffold projects using yeoman SharePoint generator
Also, ensure that you have enough disk space for those node modules. They will take up approximately 300 MB of space.

Can I turn code into an NPM module without extracting it from a project into its own repo?

Project A contains a few functions and data models I use in different repos, all tied to the same product. I'd like to turn them into an npm module, but without extracting the code from project A.
When I see other modules on npm, they generally tie to a GitHub repo that contains all the source code, as well as a full stack to run/modify the module.
Does this mean I have to extract the code from project A into its own repository, build/configure a stack to allow it to run in isolation from project A, and then import it back into project A & other projects?
Or is it possible to just export the functions w/o a full stack, and without moving the code from my main project?
an attempt to pre-empt 'duplicate' comments:
this Q talks about working with an existing module, which doesn't answer my concern, as it has to do with worrying about pull requests being merged on time
npm link, discussed here, looks like it'd do the trick if I'd already extracted the code from the project, but I'd like to avoid that.
If you really want to share a snippet through npm but still use the code at the same place in your project, you could extract the code into its own repo, but still use it inside your project as a git sub-module.
Create a submodule repository from a folder and keep its git commit history
Do you know if it's standard for npm modules in their own repos to include the full stack for running them?
Ideally that full stack is there to test the module and ease development, but it's totally optional. You could put up only a JavaScript file and a package.json and it would work.
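In other words, the published package can be as small as this sketch (the package name, function, and file name are hypothetical):

```javascript
// index.js — the only source file in the shared package.
// The accompanying package.json only needs something like:
//   { "name": "@my-org/shared-utils", "version": "1.0.0", "main": "index.js" }
module.exports = {
  formatPrice(value) {
    return `$${Number(value).toFixed(2)}`;
  },
};
```

Consumers would then require('@my-org/shared-utils') as usual.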

Including local dependencies in deployment to lambda

I have a repo which consists of several "micro-services" which I upload to AWS's Lambda. In addition I have a few shared libraries that I'd like to package up when sending to AWS.
Therefore my directory structure looks like:
/micro-service-1
  /dist
  package.json
  index.js
/micro-service-2
  /dist
  package.json
  index.js
/shared-component-1
  /dist
  package.json
  component-name-1.js
/shared-component-2
  /dist
  package.json
  component-name-2.js
The basic deployment leverages the handy node-lambda npm module but when I reference a local shared component with a statement like:
var sharedService = require('../../shared-component-1/dist/index');
This works just fine with the node-lambda run command, but node-lambda deploy drops this local dependency. That probably makes sense because I'm going below the "root" directory for my dependency, so I thought maybe I'd leverage gulp to make this work, but I'm pretty darn new to it, so I may be doing something dumb. My strategy was to:
- Have gulp deploy depend on a local-deps task
- The local-deps task would:
  - npm build --production to a directory
  - then pipe this directory over to the micro-service under the /local directory
  - clean up the install in the shared component
I would then refer to all shared components like so:
var sharedService = require('local/component-name-1');
Hopefully this makes clear what I'm trying to achieve. Does this strategy make sense? Is there a simpler way I should be considering? Does anyone have any examples of anything like this in "gulp speak"?
I have an answer to this! :D
TL;DR - Use npm link to create a symbolic link between your common component and the dependent component.
So, I have a project with only two modules:
- main-module
- referenced-module
Each of these is a node module. If I cd into referenced-module and run npm link, then cd into main-module and npm link referenced-module, npm will 'install' my referenced-module into my main-module and store it in my node_modules folder. NOTE: When running the second npm link, the name of the project is the one you find in your package.json, not the name of the directory (see npm link documentation, previously linked).
Now, in my main-module all I need to do is var test = require('referenced-module') and I can use that to my hearts content. Be sure to module.exports your code from your referenced-module!
Now, when you zip up main-module to deploy it to AWS Lambda, the links are resolved and the real modules are put in their place! I've tested this and it works, though not with node-lambda yet; I don't see why that should be a problem (unless it does something different with the package restores).
What's nice about this approach as well is that any changes I make to my referenced-module are automatically picked up by my main-module during development, so I don't have to run any gulp tasks or anything to sync them.
I find this is quite a nice, clean solution and I was able to get it working within a few minutes. If anything I've described above doesn't make any sense (as I've only just discovered this solution myself!), please leave a comment and I'll try and clarify for you.
UPDATE FEB 2016
Depending on your requirements and how large your application is, there may be an interesting alternative that solves this problem even more elegantly than using symlinking. Take a look at Serverless. It's quite a neat way of structuring serverless applications and includes useful features like being able to assign API Gateway endpoints that trigger the Lambda function you are writing. It even allows you to script CloudFormation configurations, so if you have other resources to deploy then you could do so here. Need a 'beta' or 'prod' stage? This can do it for you too. I've been using it for just over a week and while there is a bit of setup to do and things aren't always as clear as you'd like, it is quite flexible and the support community is good!
While using Serverless we faced a similar issue, needing to share code between AWS Lambdas. Initially we used to duplicate the code across each microservice, but as always it later became difficult to manage.
Since development was done in a Windows environment, using symbolic links was not an option for us.
We then came up with a solution: use a shared folder to keep the local dependencies, plus a custom-written gulp task to copy these dependencies across each of the microservice endpoints so that they can be required like npm packages.
One of the decisions we made was not to keep two places to define the dependencies for the microservices, so we used the same package.json to define the local shared dependencies; the gulp task parses this file and copies the shared dependencies accordingly, also installing the npm dependencies with a single command.
Later we made the code open source as npm modules serverless-dependency-install and gulp-dependency-install.
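To answer the original "gulp speak" question more concretely, a copy task along those lines might look like the following sketch; the directory names are taken from the layout in the question, while the /local target folder, the task name, and the merge-stream helper are assumptions:

```javascript
// gulpfile.js — copy each shared component's build output into every
// micro-service's /local folder so it can be required like a local module.
const gulp = require('gulp');
const merge = require('merge-stream'); // small helper to combine the streams

const sharedComponents = ['shared-component-1', 'shared-component-2'];
const microServices = ['micro-service-1', 'micro-service-2'];

function copyShared() {
  const streams = [];
  for (const component of sharedComponents) {
    for (const service of microServices) {
      streams.push(
        gulp
          .src(`${component}/dist/**/*`)
          .pipe(gulp.dest(`${service}/local/${component}`))
      );
    }
  }
  return merge(...streams);
}

exports['copy-shared'] = copyShared;
```

A deploy task could then simply run after it, e.g. exports.deploy = gulp.series(copyShared, deployFunction), where deployFunction is whatever currently drives node-lambda deploy.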
