how to install git+node+npm in /home/user? - node.js

I would like to have a self-installation script that I could use to set up several installations of git + node.js + npm, all working in isolation in userland (without requiring root access).
One of the goals is to have a continuous integration setup that would
isolate installations of several branches/tags
recompile the bundle from source every time
I suspect that such a script already exists somewhere, but I could not find one.
Is there a best practice somewhere that I am missing?
Thank you for your help.

I don't understand why you'd want to have isolated copies of git, but for node and npm check out this blog post.
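For illustration, a minimal sketch of the idea for node itself: build a release from source into a per-user prefix, one prefix per branch or tag (the version number and prefix path here are placeholders, not from the question; git can be built the same way with make prefix=... install):
# build one Node.js release into an isolated prefix under $HOME, no root required
PREFIX="$HOME/runtimes/node-v0.10.26"
curl -O https://nodejs.org/dist/v0.10.26/node-v0.10.26.tar.gz
tar xzf node-v0.10.26.tar.gz
cd node-v0.10.26
./configure --prefix="$PREFIX"
make && make install              # npm ships with node, so it lands in the same prefix
export PATH="$PREFIX/bin:$PATH"   # activate this particular installation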

Related

Is there a way to determine the host OS and node version based on the node_modules folder?

Is there a more or less easy way to tell (1) which OS (Windows or Linux) and (2) which version of Node.js an npm install was run on, just by looking at the node_modules folder? I tried looking in node_modules/.bin, but I see both bash and .cmd files.
Quick back story for those interested: I maintain multiple applications, one of them being a legacy angular app we inherited. This app is extremely "fragile" and sensitive; last time we deployed it, it took the team a couple of days to get the environment "just right". We have multiple chroot environments and VMs, as well as user profiles with different nodejs setups, on both Linux and Windows, which are all fine-tuned for some of these legacy applications we manage.
The problem with this specific app is that I need to make a small change to it and recompile it, and the person managing it is not currently accessible. I am extremely worried about "disrupting" anything by touching the node_modules folder, since I know it took the team a long time and much trial & error to get everything just right (including package versions, a lot of which were tried and installed manually, etc). So after I make the change I need to ensure I'm calling everything from the same exact environment.
Use the os module:
var os = require('os');
os.platform(); // e.g. 'darwin'
os.release();  // e.g. '10.8.0'
For the Node.js version:
console.log('Version: ' + process.version);

How to speed up the build of front-end monorepo projects?

Here are some of my current thoughts, but I would also like to understand how the community is doing it now.
Use caching to reduce build time
Motivation
Currently, every time someone changes something in the libs, everyone else needs to initialize it again after pulling through git. If you know which package the change is in, you can just run the initialize command for that specific package. If you don't know, you need to run the root initialize command, which is actually very slow because it re-runs the initialize command in every npm package that has one, regardless of whether the package has changed or not.
reduce initialize time and improve the collaborative development experience
support CI/CD caching of built libs to speed up build time
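As a side note, the "run the initialize command for the specified package" case above maps onto lerna's --scope filter; the package name below is just a placeholder:
lerna run initialize --scope my-changed-package   # initialize only one package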
Current state (lerna run)
First initialization of a new project: 198.54s
Re-initialization of an existing project: 90.17s
Requirements
Execute commands as concurrently as possible according to the dependency graph, and implement caching based on git changes
Execute commands in modules that specify module dependencies
Execute commands in all submodules
Implementation ideas
The lerna API is currently severely under-documented: https://github.com/lerna/lerna/issues/2013
scan lerna.json for all modules
run the specified commands in parallel as much as possible according to the dependencies
check git when running the command to see if it needs to be skipped
Schematic: https://photos.app.goo.gl/aMyaewgJxkQMhe1q6
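For illustration, a rough sketch of the git-based skipping idea using lerna's built-in filter flags (exact flag availability depends on the lerna version, so treat this as an assumption to verify):
# run initialize only in packages changed since the given ref, plus their dependents
lerna run initialize --since origin/master --include-dependents --concurrency 4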
The ultra-runner tool already exists, so it might be better to build this feature on top of it
There seems to be no support for running commands other than build at the moment, see: https://github.com/folke/ultra-runner/issues/224
Known solutions
Trying to use Microsoft's Rush, but hoping for a better solution that does not require replacing the main framework (lerna + yarn)

Node packages error after migrating project to different folder

I'm using Angular with an ASP.NET MVC project, and I've moved the codebase to another path on my hard drive. When I build, I get errors complaining about not being able to find packages. I don't think this is so much an Angular issue (using the System.js module loader), but rather a Node issue related to finding packages.
The fix so far has been to simply delete everything in node_modules and get them again. Is there a way to avoid having to do this? Otherwise, if I check my code into our source control system and someone else pulls it down, they will run into this issue as well.
[update]
When I make a copy of the project, it includes node_modules as well. I intend to check these into source control too, so that we can control when packages get updated and the dependency issues that might result.
[update 2]
Well, I think I need to go back and review what I'm doing. I never liked the idea of keeping node_modules in source control to begin with, and if I can find a way to manage "breaking changes" due to package updates, then I can forgo the mess and bloat of keeping node_modules in my source control system.
https://www.sitepoint.com/beginners-guide-node-package-manager/
When someone else pulls the code, or when you move your code to another folder, you have to run npm install so it can download the packages.
If you have already downloaded them, don't worry about running npm install again; it takes them from the cache.
Other people will need to download them the first time.
Hope this helps!
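For illustration, the usual sequence after moving or freshly cloning the project, assuming package.json is present:
rm -rf node_modules   # drop the stale install from the old path
npm install           # reinstall from package.json; npm's cache avoids re-downloading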

When to add a dependency? Are there cases where I should rather copy the functionality?

I recently helped out on a project where I added a really small dependency - in fact, it only contained a regular expression (https://www.npmjs.com/package/is-unc-path).
The feedback I got from the project's developer was that he tries to minimize third-party dependencies when they can be implemented easily - in other words, if I understand him correctly, he would rather I just copy the code instead of adding another dependency.
To me, adding a new dependency looks just like putting some lines of code into an extra file in the repo. In addition, the developers get notified through updates if the code ever needs a change.
Is it just a religious thought that drives a developer to do this? Are there maybe any costs (performance- or space-wise, etc) when adding a dependency?
I also once had a dispute with my manager concerning third-party libraries; the problem was even greater because he came to believe that you should version the node_modules folder.
The source of any conflict is usually ignorance.
His arguments were:
you should deliver a working product to the client, without requiring him to do any extra work like running npm install
if GitHub or npm is down at the moment you run npm install on the server, what will you do?
if the library that you install has a bug, who will be responsible?
My arguments were:
versioning node_modules is not going to work because of how package dependencies work: each library downloads its own node_modules dependencies, and your git repository will quickly grow to hundreds of MB. Deploys become slower and slower, since downloading half a GB of code every time takes time. npm uses a module caching mechanism, so if nothing has changed it will not download code needlessly.
the problem with left-pad was painful, but after that npm implemented a locking system, and now each package can be locked to a specific version and integrity hash.
And GitHub and npm are not single-instance services; they run in the cloud.
When installing a dependency you always have some criteria in mind, and there are community best practices; usually they come down to: 1. Does the repo have unit tests? 2. The download count. 3. When was the latest update?
The Node.js ecosystem is built on modularity; node is not popular because of luck, but because of how it was designed for creating and reusing modules. Sometimes working in a Node.js environment feels like putting Lego pieces together to build your toy. This is the main reason development in Node.js is so fast: people just reuse stuff.
In the end he stuck to his own ideas, and I left the project :D

Including local dependencies in deployment to lambda

I have a repo which consists of several "micro-services" which I upload to AWS's Lambda. In addition I have a few shared libraries that I'd like to package up when sending to AWS.
Therefore my directory structure looks like:
/micro-service-1
/dist
package.json
index.js
/micro-service-2
/dist
package.json
index.js
/shared-component-1
/dist
package.json
component-name-1.js
/shared-component-2
/dist
package.json
component-name-2.js
The basic deployment leverages the handy node-lambda npm module but when I reference a local shared component with a statement like:
var sharedService = require('../../shared-component-1/dist/index');
This works just fine with the node-lambda run command, but node-lambda deploy drops this local dependency. That probably makes sense, because my require path goes outside the service's "root" directory, so I thought maybe I'd leverage gulp to make this work, but I'm pretty darn new to it, so I may be doing something dumb. My strategy was to:
Have gulp deploy depend on a local-deps task
the local-deps task would:
npm build --production to a directory
then pipe this directory over to the micro-service under the /local directory
clean up the install in the shared
I would then refer to all shared components like so:
var sharedService = require('local/component-name-1');
Hopefully this makes clear what I'm trying to achieve. Does this strategy make sense? Is there a simpler way I should be considering? Does anyone have any examples of anything like this in "gulp speak"?
I have an answer to this! :D
TL;DR - Use npm link to create a symbolic link between your common component and the dependent component.
So, I have a project with only two modules:
- main-module
- referenced-module
Each of these is a node module. If I cd into referenced-module and run npm link, then cd into main-module and npm link referenced-module, npm will 'install' my referenced-module into my main-module and store it in my node_modules folder. NOTE: When running the second npm link, the name of the project is the one you find in your package.json, not the name of the directory (see npm link documentation, previously linked).
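Concretely, using the directory names from this example, the two linking steps look like this:
cd referenced-module
npm link                        # register a global symlink for this package
cd ../main-module
npm link referenced-module      # symlink it into main-module/node_modules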
Now, in my main-module all I need to do is var test = require('referenced-module') and I can use that to my heart's content. Be sure to module.exports your code from your referenced-module!
Now, when you zip up main-module to deploy it to AWS Lambda, the links are resolved and the real modules are put in their place! I've tested this and it works, though not with node-lambda yet; I don't see why that should be a problem (unless it does something different with the package restores).
What's nice about this approach as well is that any changes I make to my referenced-module are automatically picked up by my main-module during development, so I don't have to run any gulp tasks or anything to sync them.
I find this is quite a nice, clean solution and I was able to get it working within a few minutes. If anything I've described above doesn't make any sense (as I've only just discovered this solution myself!), please leave a comment and I'll try and clarify for you.
UPDATE FEB 2016
Depending on your requirements and how large your application is, there may be an interesting alternative that solves this problem even more elegantly than using symlinking. Take a look at Serverless. It's quite a neat way of structuring serverless applications and includes useful features like being able to assign API Gateway endpoints that trigger the Lambda function you are writing. It even allows you to script CloudFormation configurations, so if you have other resources to deploy then you could do so here. Need a 'beta' or 'prod' stage? This can do it for you too. I've been using it for just over a week and while there is a bit of setup to do and things aren't always as clear as you'd like, it is quite flexible and the support community is good!
While using Serverless we faced a similar issue when we needed to share code between AWS Lambdas. Initially we duplicated the code across each microservice, but as always, that later became difficult to manage.
Since development was done in a Windows environment, using symbolic links was not an option for us.
So we came up with a solution: keep the local dependencies in a shared folder and use a custom-written gulp task to copy those dependencies into each of the microservice endpoints, so that a dependency can be required just like an npm package.
One of the decisions we made was not to have two places defining a microservice's dependencies, so we used the same package.json to define the local shared dependencies; the gulp task parses this file, copies the shared dependencies accordingly, and also installs the npm dependencies, all with a single command.
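As a rough sketch of that idea (not the actual gulp task; the paths follow the directory layout from the question):
# copy each shared component's build output into the service, then install registry deps
mkdir -p micro-service-1/local
cp -r shared-component-1/dist micro-service-1/local/component-name-1
cp -r shared-component-2/dist micro-service-1/local/component-name-2
(cd micro-service-1 && npm install --production)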
Later we open-sourced the code as the npm modules serverless-dependency-install and gulp-dependency-install.
