Provide Node.js webapp "key in hand" - Linux

I am building a simple Node.js application for a client. The webapp should be easy to deploy on each server instance (all running Red Hat EL 6.3), "key in hand".
What is the best way to package a Node.js app? Basically, I need an "installer" or "package" to:
Install Node.js
Install the dependencies (npm install)
Populate the application files (CSS, JS, HTML, etc.)

You should deliver a self-contained package. Please check out the great site The Twelve-Factor App, specifically the build, release, run section. There is a lot of hard-won wisdom from experienced operations engineers embodied in that site.
In your app's repo, write a script (shell, node, whatever) that can generate a distributable archive (a sketch of such a script follows the list of pitfalls below).
RPM or tar archive are the two most sensible choices for you. tar is more portable and simpler; RPM would integrate nicely with an RPM-based distribution. I would recommend starting with tar if you have not done a lot of software packaging/management work, as RPM is significantly more complex than tar.
The tar archive should embed the node.js files within it. This will make your app easy to install and avoid sharing a system-wide node install, which would create artificial coupling. If you go the RPM route, you can specify node as a dependency in your RPM spec file (but you probably shouldn't - see below).
The archive should embed all of the npm dependencies as well. Don't run npm install at package install time. Consider using the npm shrinkwrap tool to manage your dependencies during development, but at deployment time they should be pre-bundled and ready to run.
Specifically, these are bad ideas you should avoid:
Do not download anything from the Internet during installation. This is brittle, slow, and can spring unpleasant surprises on you, including security problems.
Do not build artifacts at install time that can be built at build time. So ship pre-built CSS files, RequireJS-optimized files, pre-compiled binaries, etc.
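Here is a rough sketch of such a build script, run at build time on a machine with network access. All file names, paths, and the node version are illustrative:

    #!/bin/sh
    # build-dist.sh - sketch of a "generate a distributable archive" script
    set -e
    NODE_VERSION=0.8.12

    # Fetch node at BUILD time, never at install time
    curl -O http://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz
    rm -rf dist && mkdir -p dist/myapp
    tar -xzf node-v${NODE_VERSION}-linux-x64.tar.gz -C dist/myapp
    mv dist/myapp/node-v${NODE_VERSION}-linux-x64 dist/myapp/node

    # Bundle the npm dependencies now, so the target box never touches the network
    npm install --production
    cp -R server.js package.json node_modules public dist/myapp/

    # Pre-build assets here (minified CSS, requirejs-optimized JS, ...) before archiving
    tar -czf myapp-1.0.0.tar.gz -C dist myapp

On the target server, installation is then just extracting the archive and running dist/myapp/node/bin/node server.js.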
As to whether your application RPM should list node.js as a dependency or embed node into the RPM, here are some points to consider.
Embed node.js into your RPM
Single .rpm file to distribute
Allows your application to tightly control the node version it uses. (see below)
Higher reliability. The fact is your app is probably coupled fairly tightly to at least the minor version of node.js you develop on (0.8.x, for example) or even a patch release (>= 0.8.12 < 0.9, for example). It's best to use node.js to decouple your app from the OS, but don't be fooled into thinking your app will work reliably on a different version of node.js without testing and adjustment. Most commonly these days there's just one app running on the OS, and the notion of sharing node between apps incorrectly values conserving disk space over proper decoupling and operational independence of applications.
It's unclear whether there are any official/reliable pre-built RPMs out there in yumland that will "just work".
Specify node.js as a dependency of your RPM
Follows the general philosophy of OS package management (avoid duplication, conserve disk space, etc)
RPM provides capabilities beyond tar around inventory management, uninstallation, upgrade, etc. Since you are asking this question, you are probably not ready to address these properly yet, so you might want to start with tar, and once you have a solid understanding of that, consider RPM upgrade scripts, etc.
The "single file to distribute" nice point can quickly become untenable once your app starts using a database or 3, supporting daemons for email, log aggregators, etc.

Related

How can I restore node modules for multiple platforms?

My Node application needs to be deployed on Windows and Linux. The main deployment package is built on a Linux CI server.
When this package is deployed to Windows, it crashes immediately due to missing native bindings, such as those for sqlite. Only the bindings for the build platform (Linux) are restored.
With a deadline approaching, we just set up a Windows build configuration which outputs a Windows-specific package that contains the appropriate bindings, and we choose the appropriate artifact to bundle in the installer.
This works but feels fragile, as we would need to keep the Node versions in sync between the two otherwise unrelated environments. I would like to be able to do this with a single build configuration.
I couldn't find any guidance on how this is done. I'm imagining a command-line option like --platform=windows to npm ci, or a modification to package.json, but I couldn't find any information about this. Presumably this is a reasonably rare requirement, and perhaps there is no tooling around this, which would be a shame.
Another requirement is that the application must be installed without an internet connection. We cannot run npm ci or npm install when we install it as some of our clients do not permit their servers to access the public internet.
Based on your requirements, it sounds like building a package on each required platform would be the safest bet, with the fewest moving parts to go wrong.
As the comments have suggested, most projects rely on an npm install on the target platform, so you are stepping into fairly uncommon territory.
This works but feels fragile, as we would need to keep the Node versions in sync between the two otherwise unrelated environments. I would like to be able to do this with a single build configuration.
Node uses NODE_MODULE_VERSION (displayed on the releases page) to track ABI compatibility for native modules. This only changes with a new major Node release number.
The CI builds would need to create app packages for each major version of Node you run on each platform. Keeping the Node.js major versions in sync across environments is a good thing in any case. Running Node N and N-1 builds until that can be achieved gives good coverage and is probably the best option given the air-gap requirements.
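As a concrete illustration, the ABI version is readable from a running node via process.versions.modules, so a build can record it and an install can refuse a mismatched package. A minimal sketch, with illustrative file names:

    # At build time, record the ABI the native modules were compiled against
    node -p 'process.versions.modules' > .node-abi

    # At install/startup time, verify the target's node matches
    expected=$(cat .node-abi)
    actual=$(node -p 'process.versions.modules')
    if [ "$expected" != "$actual" ]; then
      echo "native modules built for ABI $expected, but this node is ABI $actual" >&2
      exit 1
    fi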
NPM Cache
If the air-gapped clients are largely on common networks, an NPM cache/proxy (Nexus/Verdaccio) may be of use. The NPM cache will need a process to snapshot the repo after a production npm install on all required platforms, so it can be pushed out to your endpoints. Unfortunately, binary modules are often distributed out of band from NPM, so they won't be stored in regular NPM caches. Each client instance will then need a complete build environment to build any native modules from source, which can sometimes present its own difficulties on Windows platforms.
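For illustration, pointing npm at an internal cache is a one-line configuration change; the host name here is made up:

    # Resolve all installs through an internal Verdaccio/Nexus proxy
    npm set registry http://npm-cache.internal:4873
    npm ci   # works without public internet access, provided the cache is warm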
Alternatives
Node.js is not a great platform for distributing packaged applications to many diverse clients, especially if you need to distribute Node itself. Any language with an external VM requirement presents difficulties, and Node's package-management choices and reliance on native modules exacerbate this.
I've given up in the past and converted clients (albeit thin ones) to Go, as it lends itself to cross-platform distribution a lot better by removing the external runtime requirement and having fewer variables.

Node.js app for end-user distribution

I'm looking for the proper way to distribute/deploy a node.js app that would run as a small webserver on the user's machine.
Is there a stub method, install script, or "install wizard" that would download all node_modules dependencies, download the latest Node.js binary, set up the environment, etc., or do I have to distribute it in bulk with everything packed? Is there any guide for that purpose?
Edited:
You could install node and npm, then download your dependencies by running npm install on the command line (after declaring them in your package.json); only then can users run your script. This is how you do development in Node.js, or deploy to a development server. See using npm. You could automate that with a shell script if that is what you are after.
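A minimal sketch of that flow, with illustrative names and versions:

    # Declare dependencies in package.json ...
    cat > package.json <<'EOF'
    {
      "name": "my-app",
      "version": "1.0.0",
      "dependencies": { "express": "^4.17.0" }
    }
    EOF
    npm install     # ... fetch them into node_modules ...
    node server.js  # ... then run your entry point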
However, when distributing programs to end-users that might not be the best approach. Linux users are used to a package (.deb for instance) and Windows users are used to an .exe or a setup wizard.
That is why I recommended the tools below. I also assumed you were targeting Windows, as this is less of a problem in unix-like environments.
If you want a single file (.exe), pkg and nexe are made for that purpose. These Node.js tools are used by the developer to compile JavaScript code into a single executable binary that is convenient for end-users and Windows deployment. The resulting .exe file is comparatively light and does not require node to be installed on the end-user's computer.
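For example, with pkg the whole build can be a couple of commands; the entry point, target triple, and output name below are illustrative:

    npm install -g pkg
    # Compile index.js and its dependencies into a standalone Windows executable
    pkg index.js --targets node14-win-x64 --output myapp.exe

nexe offers a similar command-line workflow.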
Electron along with electron-packager can produce setup wizards, but it installs a lot of files even for the smallest program. Your program will include all of Node and Chromium, which is why it produces heavy installs.
NSIS can also create a setup wizard, it is simple and does common stuff well (copying files, running shell scripts).
Original answer:
Short answer is: not really.
You have to keep in mind that JavaScript is, and has always been, an interpreted language, so until recently it was never compiled to a binary as you might do with other languages. Some exploration has been going on, but essentially you won't get a "good practice" answer.
The long answer is, maybe, for some limited use cases:
There is the brand new pkg, which does exactly this, and it looks promising.
There has been nexe for a while, it works great for some use cases (maybe yours). Native/compiled modules are still an issue however.
Electron might work for a full blown app with a significant user interface, but it is not light or compact.
You could always use browserify to concatenate and uglify all your code with the modules you use, and then make an installer with something like NSIS to set up node and your script. Native modules would still be a problem, however.
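A rough sketch of that pipeline, with illustrative file names:

    npm install -g browserify uglify-js
    browserify app.js -o bundle.js        # inline all require()'d modules
    uglifyjs bundle.js -o bundle.min.js   # minify the result
    # bundle.min.js plus a node binary is what the NSIS script would then install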

How to bundle a third party binary with Electron?

I am still new to the electron ecosystem and to desktop development in general, but what I wish to do is interface with a third-party, open-source application that comes bundled with my software. First, I am unsure what the packaging options for distribution should be. Is it customary to have two downloads, one for users that already have the third-party binary installed, and another that includes it? Also, how do I go about actually packaging and installing the binary? Should this be an option in my package.json? What kind of script should I execute? Are there any npm modules to facilitate this?
Edit: is it possible to invoke npm from my main.js even though a user has not previously installed node? I know node is bundled with the electron package, but is npm too?
The binary in this case is PostgreSQL.
There are a couple of options that come to mind.
Bundle a 3rd party installer w/ your app. This is what I did recently. On the first run I check if the service that I need is installed / running, and if not I call the 3rd party installer / start it. When the installer quits I simply app.relaunch() and start consuming it. Of course you'll need installers for each platform you plan to support. And you'll have to figure out ways to check if the software is installed (properly) for each platform.
Bundle binaries w/ your app. Of course you can bundle pretty much anything w/ your electron app. Again, you'll need binaries for each platform you plan to support. And of course they shouldn't be linked to anything that the default user doesn't have on their machine, like SDKs and additional headers...
Less comfy, but you can always add a start-up message or pre-download message telling users that they need software XY in order to run your application.
A derivative of 1 and 2: download required stuff on demand. For your example this would mean checking the user's OS and arch and then downloading the required installers or binaries if available. You could also build the stuff on the user's machine, although that is probably the worst/biggest/most complex solution.
Then there are things like https://www.npmjs.com/package/pg - you should always check npm to see if someone already built what you need ;)
I'd recommend using the great electron-builder, which makes bundling stuff w/ your app a piece of cake.
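As an illustration, electron-builder's extraResources option can carry a third-party binary into the packaged app; the paths below are made up:

    cat > electron-builder.yml <<'EOF'
    extraResources:
      - from: vendor/postgres   # binaries checked into / fetched into your repo
        to: postgres            # ends up under process.resourcesPath at runtime
    EOF
    npx electron-builder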
Feel free to comment if you need more intel.

Installing Meteor packages globally

Is there a way to install meteor packages globally?
That way, packages installed globally once would be available without an internet connection for projects created later, avoiding repetitive downloading, among other benefits one may imagine.
Like in Node.js: using the npm command with the -g flag (npm install -g) installs node packages into a global directory, and JavaScript programs can load them from there if available, in addition to looking in and loading packages from the project's node_modules folder.
Meteor already downloads packages into a global repository that all your local apps benefit from.
So if you meteor add iron:router@1.0.7, it is downloaded and added to your project. Next time another project requires the same version, it is served from that same spot.
Also, there is a PACKAGES_DIR environment variable which, when set, allows you to keep your own local packages centrally, so that you can share them among projects. In fact, you can keep that directory on a network drive (NFS) which your whole team can mount and consume centrally.
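A small sketch, using the variable as named above; the path and package name are illustrative:

    export PACKAGES_DIR=/mnt/shared/meteor-packages
    meteor add mycompany:mypackage   # local packages in the shared directory are picked up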
Yet, there is an inherent problem. Meteor's version resolver looks for updates unless you pin down your package dependency versions, and that is exactly why meteor seems so desperate to be connected.
Even if you pin your dependencies, the packages you depend on may not have pinned theirs (which apparently is the case for most packages), so Meteor keeps looking for updates to the whole package tree and downloads those that it deems to satisfy the version constraint resolver.
The good news is, they are constantly improving their tooling, requiring fewer lookups and offering faster builds, better search, etc.
All in all, in essence, there is not much you can do unless Meteor provides some way of hosting an entire mirror of its package repository for you to consume offline. And I guess it is very unlikely to happen.
Meteor is a tool for the connected world and it does assume your connectivity. Heck, the whole journey begins with a curl https://install.meteor.com/ | sh
And yes, it would be great if we could hack away on a remote beach, or on the 12-hour flight to that beach.
Until then, happy coding online ;)

Best practice for bundling third party libraries for distribution in Python 3

I'm developing an application using Python 3. What is the best practice for using third-party libraries, both during development and for end-user distribution? Note that I'm working within these constraints:
Developers in the team should have the exact same version of the libraries.
An ideal solution would work on both Windows and Linux.
I would like to avoid making the user install software before using our own; that is, they shouldn't have to install product A and product B before using ours.
You could use setuptools to create egg files for your libraries, assuming they aren't available in egg form already. You could then bundle the eggs alongside your software, which would need to either install them, or ensure that they were on the import path.
This has some complexities, e.g. if your libraries have C extensions, then your eggs become platform-specific, but in my experience this is the most widely accepted means of 'bundling' stuff in Python.
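A dated but minimal sketch of that workflow; the library name and paths are illustrative:

    # In each third-party library's source tree, build an egg ...
    python setup.py bdist_egg
    # ... then collect the eggs alongside your application
    cp dist/somelib-1.0-py3.2.egg /path/to/your/app/vendor/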
I have to say that this remains one of Python's weaknesses, though; the third-party ecosystem is certainly aimed at developers rather than end-users.
There are no best practices, but there are a few different tracks people follow. With regard to commercial product distribution, there are the following:
Manage Your Own Package Server
With regard to your development process, it is typical to have your dev boxes update from a local package server. That allows you to "freeze" the dependency list (i.e. just stop getting upstream updates) so that everyone is on the same version. You can update at particular times and have the developers update as well, keeping everyone in lockstep.
For customer installs you usually write an install script. You can collect all the packages and install your libs, as well as the other dependencies, at the same time. There can be issues with trying to install a new Python, or even any standard library, because the customer may already depend on a different version. Usually you can install in a sandbox to separate your packages from the system's packages. This is more of a problem on Linux than Windows.
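As an illustration, an offline customer install can ship a directory of pre-downloaded packages and install them into a sandbox; the paths are made up:

    # Create an isolated environment, away from the system's Python packages
    python -m venv /opt/myapp/venv
    # Install strictly from the bundled directory - no network access needed
    /opt/myapp/venv/bin/pip install --no-index --find-links ./vendor -r requirements.txt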
Toolchain
The other option is to create a toolchain for each supported OS. A toolchain is all the dependencies (up to, but not including base OS libs like glibc). This toolchain gets packaged up and distributed for both developers AND customers. Best practice for a toolchain is:
change the executable name to prevent confusion (e.g. python -> pkg_python)
don't install in .../bin directories, to prevent accidental usage (e.g. on Linux you can install under .../libexec; /opt is also used, although personally I detest it)
install your libs in the correct location under lib/python/site-packages so you don't have to use PYTHONPATH.
Distribute the source .py files for the executables so the install script can relocate them appropriately.
The package format should be an OS native package (RedHat -> RPM, Debian -> DEB, Win -> MSI)
For developers, use pip with a requirements file.
For end users, specify requirements in setup.py.
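A minimal sketch of keeping developers in lockstep with a requirements file; the packages and versions are illustrative:

    cat > requirements.txt <<'EOF'
    requests==2.6.0
    lxml==3.4.2
    EOF
    pip install -r requirements.txt   # every developer gets identical versions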
