After cloning a git repository, reuse node modules without downloading them again - node.js

I am working on a client-side application.
We are using the following technologies:
git, node, ember, grunt, sass and other components
Every time I clone the application from the git server, I have to run the make tooling to download all the necessary node, sass and bower components; about 200MB of data gets downloaded and a lot of time is consumed.
Is there a way to run the app by reusing the already downloaded modules, without running the make tooling and downloading the node modules again?

Yes, you can include the node modules in your git repository as well. You'll still have to download them (there's no way around fetching the modules somehow). I'm guessing your .gitignore has a line that looks like node_modules. If you remove that line, the modules will be committed to your git repository and will come down with every git clone.
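A minimal sketch of that change, assuming node_modules currently appears in your .gitignore:
# delete (or comment out) the "node_modules" line in .gitignore, then:
git add .gitignore node_modules
git commit -m "Track node_modules in the repository"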
Be aware that there are a couple of drawbacks to this method:
Git repo size will increase significantly
Modules that have to be compiled likely won't work on other machines, especially different OS's (developing on a Mac, deploying on Linux, for example).

Related

How to import and use a modified npm library package dynamically

I am using the sigmajs library for creating node-based graph visualisations. The library package had a few bugs, so I modified a few files in the source code of the library and fixed them.
I have hosted my graphs on a Django server, and whenever I host it, the sigma package listed in package.json gets loaded dynamically each time. The static library files on my machine that I modified and fixed don't get loaded, so I get the same old package and not the modified one.
How do I access the modified library package dynamically when I host the server?
My advice is to copy your fixed version of the library onto the server and install it from the local path instead of from the remote npm registry, like this:
npm install --save /path/to/fixed/lib/dir/in/server
See this answer: npm local install
Keep in mind that your fixed lib won't stay in sync with the official one.
I don't know how you modified the library, but I suggest forking the official repository and synchronizing your local copy with the remote one, as explained for example here: sync forked repo github.
That way you can stay in sync with the official repo while you maintain your fix, and you install your modified local copy. Also consider opening issues and PRs on the official sigmajs repo so your fix can be applied directly to the official library. If they are accepted, you can then install the official version directly.
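If you go the fork route, npm can also install straight from a git URL; the fork name and branch below are only placeholders:
npm install --save git+https://github.com/your-user/sigma.js.git#bugfix-branch
# package.json will then record the git URL instead of a registry version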

How do you deploy a continuously running app when using the npm CLI

I have a largish Node.js based web app, with both server and client components working together. I am currently deploying the app by using git pull to take my latest production branch from the server repository. A git post-commit hook runs to do an npm install and a rebuild of the server's .env file, and PM2 monitors the various processes (3 web servers), using a change in the .env file to restart them.
node_modules is at the highest level of the project with separate server and client subdirectories. Since this is using http2 on a fast LAN, I don't bother compressing the client files with webpack or the like, although I do use rollup on lit-element and lit-html to sort out the import statements (they are not relative or absolute) that they have embedded in them.
I've just been reading that I should really have been doing an npm ci for my node dependencies, but the instructions for it say that it blows away the node_modules directory and starts again (whereas npm install doesn't). Since this is all running on a Raspberry Pi, that's not instantaneous.
I am not sure a temporary loss of node_modules should affect a running app too much - after all, I believe the modules will all have been cached into memory, but they just might not have been, and there is also a possibility that one of the servers falls over and PM2 restarts it, so I am wondering ....
So what is best practice here? Is it possible, for instance, to copy package.json and package-lock.json to a special build subdirectory, build the node_modules directory there and then, once built, move it back into place? Or is there a better way?
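Roughly, I am imagining something like the following (the paths and the final reload step are just guesses):
# build the dependency tree to one side, then swap it into place
mkdir -p /tmp/build
cp package.json package-lock.json /tmp/build/
(cd /tmp/build && npm ci)
mv node_modules node_modules.old
mv /tmp/build/node_modules .
rm -rf node_modules.old
pm2 reload all    # pick up the fresh dependency tree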

Easy install of a Node.js package without publishing

I am looking for a way to deploy a node js app to multiple machines locally.
Is there some way to create a batch file, zip, or installer file that will put my node js application and all its dependencies, and possibly get node js too, onto multiple machines easily by sending one or more files to install?
Also, is there some way to provide updates if the code is updated to all these machines?
Basically, I want to be able to install my Node.js package/application in multiple locations locally without having to publish my work to npm. Any ideas? I can't seem to find anything out there except for putting Node.js on a web server, or publishing to npm.
This is quite a broad question. Without using advanced tools, these two commands could work:
git pull origin master
npm install
or a solution with rsync
node js application and all its dependencies
Run an npm install where you're developing your application. Then, just tarball the whole thing, including the node_modules directory. When you deploy your tarball to another machine, be sure to run npm rebuild so that any binary dependencies are built for the platform you just deployed to. If you do your initial npm install on the same platform type, you can usually skip the rebuild step.
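For example, roughly (the archive name and target path are just placeholders):
# on the build machine: install dependencies, then pack everything including node_modules
npm install
tar czf myapp.tar.gz --exclude=.git .
# on each target machine:
mkdir -p /opt/myapp && tar xzf myapp.tar.gz -C /opt/myapp
cd /opt/myapp && npm rebuild   # recompile any native add-ons for this platform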
Also, is there some way to provide updates if the code is updated to all these machines?
There are an infinite number of ways, and what you pick depends on your needs. You could check your whole project, including node_modules, into version control and just have a Bash script regularly pull from a branch and bounce things as necessary for your specific needs. Beware though that node_modules tends to be huge... it's usually left out of version control. Perhaps stick to the tarball on a server and pull that as necessary.
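One very simple flavour of that, as a script you could run from cron (the URL and paths are made up):
#!/bin/sh
# fetch the latest tarball and redeploy
curl -fsSL http://build-server.local/myapp.tar.gz -o /tmp/myapp.tar.gz
rm -rf /opt/myapp && mkdir -p /opt/myapp
tar xzf /tmp/myapp.tar.gz -C /opt/myapp
cd /opt/myapp && npm rebuild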
and possibly get node js too
Keep that separate. You don't need to deploy Node.js every time you deploy your application.

Best way to set up a node.js web project in a closed environment

We build a web application and our project uses various npm packages for development, testing and run-time.
The project is built as part of a large project in TFS. TFS runs ant to build the project. Our build.xml first runs npm install, then transpiles and minifies the TypeScript and Sass files (using Grunt tasks) and then builds the final war file.
This all works OK, but our TFS is not allowed to access the Internet during the build, only our local network. Therefore, we have all the npm libraries we use copied to a file server in our network, and our package.json dependencies point to paths on that file server.
Does this seem like a reasonable solution?
The problem we have is that the npm install takes about 10 minutes to get all of the >50 packages we use (which include karma, grunt, sass, tslint, etc. – the total is 170MB).
We are now looking for ways to reduce the TFS build time. One option is to put the node_modules in our source control and skip the npm install step, but it seems wrong to put third-party code in our source control.
I'd love to hear other ideas for handling this and getting a shorter build time.
Note that on a developer's machine the project builds in no time, as all packages are already installed, but TFS builds start by getting a clean environment from source control, so nothing is installed.
Tough problem. You could have TFS check if your package.json checksum has changed in order to determine if a "clean" is necessary. You'd still have a 10 minute build whenever package.json is updated, but package.json changes are usually infrequent.
The lines become blurred when you host your own npm libraries since this is essentially taking a snapshot of only the dependencies you need. Therefore, if you added a dependency, colors, you'd have to update your npm repo. That could be viewed as updating the node_modules folder on your npm repo. It's a static list of available dependencies which essentially defeats the purpose of a package.json (unless of course other internal apps use the internal npm repo).
BUT, I digress, I'd argue that the best option is to have a package.json checksum for TFS to know if it should bother rebuilding node_modules.
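A minimal sketch of that check as it might appear in the build script (the file names and hashing tool are assumptions):
NEW_SUM=$(md5sum package.json | cut -d' ' -f1)
OLD_SUM=$(cat .package.json.md5 2>/dev/null || true)
if [ "$NEW_SUM" != "$OLD_SUM" ]; then
  npm install                        # only rebuild node_modules when package.json changed
  echo "$NEW_SUM" > .package.json.md5
fi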

Using NodeJS, Bower and GulpJS in a project, what should I exclude from my Git repository?

I recently started working on a web project. I'm using:
NodeJS as a server
Bower - to get all the dependencies
GulpJS - for build and other tasks
Git - to save my work
For simplicity, let's say that I'm only writing HTML/CSS/JavaScript.
It doesn't seem to make sense to save ALL the project files in Git including external JavaScript libraries, since I only want to save the core files (the files that I created and modified myself).
On the other hand, if I want to hand over the project to another developer and I only hand him my own files without all the dependencies and libraries, how will he know which dependencies to get? How will he be able to build and run the project?
(I'm still new to bower, gulp and node)
So, what files do I need to save in my repository (the minimum number of files) to be able to build and work on the project?
According to what you said, your .gitignore file should look like this:
node_modules
bower_components
dist
.tmp
On the other hand, if I want to hand over the project to another developer and I only hand him my own files without all the dependencies and libraries, how will he know which dependencies to get?
You don't include your dependencies such as node_modules and bower_components, but package.json and bower.json track those dependencies, so that when a new user makes a clone, he only has to run npm install and bower install.
That is, provided you took care to use the --save or --save-dev flags when you npm install or bower install new packages.
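For example (the package names are only illustrations):
npm install --save express      # recorded under "dependencies" in package.json
npm install --save-dev gulp     # recorded under "devDependencies" in package.json
bower install --save jquery     # recorded in bower.json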
There's a quite active repository on GitHub containing predefined .gitignore files for different languages, platforms and build tools.
https://github.com/github/gitignore
Although there's no bower- or gulp-specific configuration there (yet), I usually find it quite useful when trying new things.
