I want to build my Node.js application in an Azure Website.
There will be usage of different npm packages via my package.json file.
My problem is that I often receive error messages related to missing npm modules.
Normally I put my files on the server via FTP, or edit them there directly via the Visual Studio 2015 Azure plugin. This may be the reason why npm isn't triggered the way Microsoft intended.
I would prefer a way in which I can just run commands with elevated privileges to have full control over npm myself.
Which ways are possible to avoid these problems?
If you're publishing your Node.js application 'manually' via FTP, there are a few concerns to be aware of.
First of all, 'manually' means manually.
Git
If you use continuous deployment via Git, the final deployment step is to call npm install in your current application folder; this installs all the packages listed in the package.json file.
The node_modules folder is excluded by default in the .gitignore file, so all packages are downloaded by the server.
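For illustration, a typical flow looks like this (the remote name azure is whatever you configured for your site, and the commit message is arbitrary):

echo node_modules/ >> .gitignore     # keep dependencies out of the repo
git add . && git commit -m "deploy"
git push azure master                # Kudu runs npm install on the server after the push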
Web deployment
If you're using web deployment from Visual Studio or the command line, all the files contained in your solution are copied to the hosting environment, including the node_modules folder. Because of this, the deployment can take a long time to finish, due to the huge number of dependencies and files that the folder contains.
Even worse: this scenario could lead you into the same situation you're facing right now.
FTP deployment
You're copying everything yourself, so the same thing that occurs with web deployment happens with the FTP deployment method.
--
The thing is that when you copy all those node_modules folder contents, you're assuming that those dependencies remain the same in the target environment. In most cases that's true, but not always.
Some dependencies are platform dependent: maybe in your dev environment a dependency works fine on an x86 architecture, but what if your target machine or website (or some mix between them) is x64? (A real case; I have suffered it myself.)
Other related issues can happen: maybe your direct dependencies don't have the problem, but the dependencies linked to them could.
So it is always strongly recommended to run npm install in your target environment and to avoid copying the dependencies over directly from your dev environment.
That way, you need to copy the folder structure to your target environment, excluding the node_modules folder. Then, when the files are copied, you need to run npm install on the server.
To achieve that you can go to
yoursitename.scm.azurewebsites.net
There you can go to the "Debug Console" tab, navigate to the directory D:\home\site\wwwroot and run
npm install
After that, the packages and dependencies are downloaded for the server/website architecture.
Hope this helps.
Azure tweaks the Kudu output settings; in local Kudu installations it looks like the output is normalized.
A workaround (not perfect) could be this:
npm install -dd
Or, even more detailed:
npm install -ddd
The most relevant answer from Microsoft itself is this:
Using Node.js Modules with Azure applications
Regarding control via a console with elevated privileges, there is the option of using the Kudu console. But the error output is quite weird; it's kind of like blindly putting commands into the console without much feedback.
Maybe this is a way to go, but I haven't tried it yet.
Regarding deployment, it looks like Azure wants you to prefer continuous deployment.
The suggested way is described here.
Related
We are developing a Node.js application; since it is under development, a lot of deployments have to be done to the production environment. During every release, do we have to move the entire node_modules folder to production?
Note: the production environment is restricted from internet access, so we cannot use npm install there.
You should check out the .npmrc file, which, among other things, decides the source of all your node modules.
Your production server should have such a file in its project root, and it must store the location of the npm packages that must be installed.
The registry value inside the .npmrc file is where you point npm at your packages. You can read more about it here.
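For example, a minimal way to set that value (the registry URL is hypothetical; use your internal server's address):

npm config set registry http://npm.internal.example.com/   # writes the registry entry to your .npmrc

Alternatively, put the line registry=http://npm.internal.example.com/ directly into the project's .npmrc file.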
What is the best practice for deploying a Node.js application?
1) Directly moving the node_modules folders from the development server to the production server, so that the same local environment is recreated in production. Whatever changes are made to any of the node modules remotely will not affect our code.
2) Running the npm install command on the production server with the help of package.json. Here the problem is that any changes in the node modules will affect our code. I have faced some issues with the loopback module (issue link).
Can anyone help me?
Running npm install on the production server cannot be done in certain scenarios (lack of compilation tools, restricted internet access, etc.), and if you have to deploy the same project on multiple machines, it can also be a waste of CPU, memory and bandwidth.
You should run npm install --production on a machine with the same libraries and Node version as the production server, compress node_modules and deploy it to the production server. You should also keep the package-lock.json file to pin versions.
This approach also allows you to build/test your code using development packages and then prune node_modules before the actual deploy.
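A sketch of that flow (the host name and paths are hypothetical):

npm install                      # full install, including devDependencies, for build/test
npm prune --production           # drop devDependencies before shipping
tar czf node_modules.tar.gz node_modules package-lock.json
scp node_modules.tar.gz deploy@prod-host:/srv/app/   # copy the archive to the production server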
Moving the node_modules folder is overkill.
Running npm install might break the version dependencies.
The best approach is npm ci. It uses the package-lock.json file and installs the required dependencies without modifying the versions.
npm ci is meant for continuous integration projects. LINK
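For reference, npm ci removes any existing node_modules, installs exactly what package-lock.json specifies, and fails if the lock file is missing or out of sync with package.json:

npm ci    # reproducible install from package-lock.json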
I am an ASP.NET Core developer, but I recently started working with Node.js apps. For me, one of the challenges you mentioned was moving the node_modules folder to production. Instead of moving the whole folder to production, or only running the npm install command on the production server, I figured out and tried a way of bundling my Node.js app using Webpack into a single bundle (or multiple bundles), and I got rid of the mess of managing the node_modules folder. It picks up only the node_modules packages that are actually used/referenced in my app and bundles them into a single file along with my app code, and I deploy that single file to production without moving the entire node_modules folder.
I found this approach useful in my case, but please tell me if this is not the correct way with regard to the performance of the app, or if this approach has any cons.
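For what it's worth, a sketch of such a bundling step (the entry file and output path are hypothetical, and the exact flags depend on your webpack version):

npx webpack ./server.js --target node --mode production --output-path dist
node dist/main.js    # the single bundle is what gets deployed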
Definitely npm install. But you shouldn't do this by hand when it comes to deploying your app.
Use a tool for this, like PM2.
As for your concern about changes in packages, the short answer is package-lock.json.
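As an illustration, PM2 can drive the whole deployment from a config file (the file and environment names below are PM2's conventional ones; yours may differ):

pm2 deploy ecosystem.config.js production setup   # one-time setup on the target machine
pm2 deploy ecosystem.config.js production         # pull the code, install and restart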
My guess is that by asking this question you don't really understand the point of the package.json file (I mean no bad intent by saying this).
The package.json file is explicitly intended for this purpose (that, and uploading to the npm registry): the transfer of a node package without having to transfer the sizeable number of dependencies along with it.
I would go as far as to say that one should never manually move the node_modules directory at all.
Definitely use the npm install command on your production server; this is the proper way of doing it. To avoid any changes to the node_modules directory compared to your local environment, use the package-lock.json file. That should help with minimising changes to the source code in node_modules.
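A short sketch of that workflow (the commit message is arbitrary):

npm install                              # resolves versions and writes package-lock.json
git add package.json package-lock.json
git commit -m "pin dependency versions"
# on the production server:
npm install --production                 # honours the lock file (or use npm ci)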
I am looking for a way to deploy a Node.js app to multiple machines locally.
Is there some way to create a batch file to zip, or an installer file, that will put my Node.js application and all its dependencies, and possibly get Node.js too, easily on multiple machines by sending one or more files to install?
Also, is there some way to provide updates to all these machines if the code is updated?
Basically, I want to be able to install my Node.js package/application in multiple locations locally without having to publish my work to npm. Any ideas? I can't seem to find anything out there except for putting Node.js on a web server, or publishing to npm.
This is quite vast. Without using advanced tools, these two commands could work:
git pull origin master
npm install
Or a solution with rsync.
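For example, a sketch that excludes node_modules so that the target runs its own install (host and paths are hypothetical):

rsync -az --delete --exclude node_modules ./ deploy@target:/srv/app/
ssh deploy@target 'cd /srv/app && npm install --production'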
Node.js application and all its dependencies
Run an npm install where you're developing your application. Then just tarball the whole thing, including the node_modules directory. When you deploy your tarball to another machine, be sure to run npm rebuild so that any binary dependencies are built for the platform you just deployed to. If you did your initial npm install on the same platform type, you can usually skip the rebuild step.
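A sketch of that tarball flow (the archive name is arbitrary):

tar czf ../app.tar.gz .    # includes node_modules
# on the target machine, inside the app folder:
tar xzf app.tar.gz
npm rebuild                # recompile binary addons for this platform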
Also, is there some way to provide updates to all these machines if the code is updated?
There are an infinite number of ways, and what you pick depends on your needs. You could check your whole project, including node_modules, into version control and just have a Bash script regularly pull from a branch and bounce things as necessary for your specific needs. Beware, though, that node_modules tends to be huge; it's usually left out of version control. Perhaps stick to keeping the tarball on a server and pulling that as necessary.
and possibly get Node.js too
Keep that separate. You don't need to deploy Node.js every time you deploy your application.
We build a web application, and our project uses various npm packages for development, testing and runtime.
The project is built as part of a larger project in TFS. TFS runs Ant to build the project. Our build.xml first runs npm install, then transpiles and minifies the TypeScript and Sass files (using Grunt tasks), and then builds the final war file.
This all works OK, but our TFS is not allowed to access the internet during the build, only our local network. Therefore, we have copies of all the npm libraries we use on a file server in our network, and our package.json dependencies point to paths on that file server.
Does this seem like a reasonable solution?
The problem we have is that npm install takes about 10 minutes to get all the >50 packages we use (which include karma, grunt, sass, tslint, etc.; 170MB in total).
We are now looking for ways to reduce the TFS build time. One option is to put node_modules in our source control and skip the npm install step, but it seems wrong to put third-party code in our source control.
I'd love to hear other ideas for handling this and getting a shorter build time.
Note that on a developer's machine the project builds in no time, as all packages are already installed, but TFS builds start by getting a clean environment from source control, so nothing is installed.
Tough problem. You could have TFS check whether your package.json checksum has changed in order to determine whether a "clean" is necessary. You'd still have a 10-minute build whenever package.json is updated, but package.json changes are usually infrequent.
The lines become blurred when you host your own npm libraries, since this is essentially taking a snapshot of only the dependencies you need. Therefore, if you added a dependency, say colors, you'd have to update your npm repo. That could be viewed as updating the node_modules folder on your npm repo. It's a static list of available dependencies, which essentially defeats the purpose of a package.json (unless, of course, other internal apps use the internal npm repo).
BUT, I digress. I'd argue that the best option is to have a package.json checksum for TFS to know whether it should bother rebuilding node_modules.
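The idea in shell form (a sketch; in TFS this would be a build step, and the hash-file name is arbitrary):

NEW_HASH=$(sha1sum package.json | cut -d' ' -f1)
if [ "$NEW_HASH" != "$(cat .pkg-hash 2>/dev/null)" ]; then
  npm install                    # only rebuild node_modules when package.json changed
  echo "$NEW_HASH" > .pkg-hash
fi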
I have a very special requirement from my client. We have been using npm to install karma and phantomjs for quite a while. Everything worked fine until we had to move everything off the cloud onto internal infrastructure. Now things get complicated. The internal infrastructure doesn't have internet access, so we cannot use npm to resolve dependencies anymore. We tried to move the node_modules folder from a dev machine to the internal infrastructure machine. It didn't work, because our dev machines are OSX and Windows while the server is CentOS, and phantomjs is OS specific, whereas npm is able to work out the versioning. What options do we have to resolve dependencies? I just learned that the node_modules name cannot be changed. I was thinking of checking in OS-specific node_modules folders, but that wouldn't work since npm only looks for a folder named node_modules.
I got the same error as in the thread PhantomJS Crash - Exit Code 126 when I was trying to use node_modules from OSX on CentOS.
Install all dependencies on the first OS (e.g. OSX), assuming that you have a package.json listing all of them:
npm install
Rename the created node_modules folder to node_modules_mac.
Repeat the steps above on a different OS (e.g. Windows), renaming node_modules to something like node_modules_windows.
On the target OS, move the folders created above into your app folder and create a symbolic link named node_modules that points to the appropriate folder (node_modules -> node_modules_mac on OSX).
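A sketch of those steps in shell form (the ln -s command is for OSX/Linux; on Windows you would use mklink /D instead):

npm install
mv node_modules node_modules_mac      # repeat per OS: node_modules_windows, ...
# on the target OSX machine, inside the app folder:
ln -s node_modules_mac node_modules   # node resolves modules through the link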
Why don't you just host your own private registry? You can store the registry in the internal infrastructure.
The de facto registry is @isaacs' own npmjs.org. It can be found here:
https://github.com/isaacs/npmjs.org
It does require using CouchDB as the database, however, and that can be daunting. There are alternatives that allow you to do this. For example, reggie:
https://github.com/mbrevoort/node-reggie