Running and managing Node.js applications on a single server

Is there a good way to run and manage multiple nodejs apps on a single server?
I've been looking at haibu and nodester, but they seem a little complex for what I am trying to do.
I was also looking at forever, and I think that may work with a config file and web GUI, but I'm not sure how I would handle passing the port information via environment variables or arguments.

I use Supervisord & Monit; more details and a configuration example are here: Process Management at Bringr.
Moreover, you can specify environment variables directly from the supervisord configuration file (see sub-process environment). But I personally prefer to add these variables directly inside ~/.bashrc on each machine.
If the port number isn't going to change for each application (only between the production and development environments), I'd recommend specifying it inside a config.json (or directly inside package.json). The config.json then contains a different port number for each application, depending on the environment:
{
  "myapp": {
    "production": { "port": 8080 },
    "development": { "port": 3000 }
  }
}
And inside myapp.js:
var config = require('./config');
app.listen(config.myapp[process.env.NODE_ENV].port)
With process.env.NODE_ENV declared in ~/.bashrc.
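For reference, a minimal sketch of both options (program name and paths are just placeholders):

# in ~/.bashrc on each machine
export NODE_ENV=production

Or, per sub-process in the supervisord config:

[program:myapp]
command=node /var/www/myapp/myapp.js
environment=NODE_ENV="production"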

I wrote an app, nodegod, that I use for a handful of deployments of maybe 10 apps each.
nodegod reads an app list from json. It has an internal state machine for each app that handles the app life cycle in a safe manner including restarts, and the web page features stop/start/debug.
The web interface uses web sockets, so that you can manage remote servers over ssh.
As you deploy over rsync, apps restart automatically.
Because nodegod monitors the stdout of other apps, you can capture an app's final breath, such as segfaults and malloc errors.
I use a fork of http-proxy in front of a slew of express instances, so any number of apps can share a single server port per DNS name for both HTTP and WebSockets.
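For illustration, plain node-http-proxy can do the hostname-routing part with something like the sketch below (hostnames and ports are made up):

var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

// map each hostname to the local port its express instance listens on
var routes = {
  'app1.example.com': 'http://127.0.0.1:3001',
  'app2.example.com': 'http://127.0.0.1:3002'
};

var server = http.createServer(function (req, res) {
  var target = routes[req.headers.host];
  if (target) return proxy.web(req, res, { target: target });
  res.writeHead(404);
  res.end('unknown host');
});

// forward websocket upgrades to the same target
server.on('upgrade', function (req, socket, head) {
  var target = routes[req.headers.host];
  if (target) proxy.ws(req, socket, head, { target: target });
});

server.listen(80);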
I wrote a haraldops module to read app configuration from outside the source tree. With that you can monitor and get emails whenever something's up with an app.
App configurations I keep in a git repo in the file system.
It's not rocket science, and it all fits very nicely together. Only node and json: SIMPLE gets more done.

If your server has upstart, just use it. I've had no luck with forever and similar tools.
If you want to proceed with upstart, roco would be a nice deployment solution:
roco deploy:setup:upstart
roco deploy
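If you go the plain-upstart route, the job file itself is only a few lines; a rough sketch (app name and paths are placeholders):

# /etc/init/myapp.conf
description "myapp node server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
env NODE_ENV=production
exec /usr/bin/node /var/www/myapp/app.js

You can then manage it with sudo start myapp / sudo stop myapp.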

We're constantly trying to improve forever and haibu at Nodejitsu. Seems like the approach you're looking for here is a .forever configuration file for complex options. This feature has been on our backlog for a while now:
https://github.com/nodejitsu/forever/issues/124
Check back. I consider it pretty high priority after the next round of performance improvements.

These days I've taken to using dokku, which is an OSS clone of Heroku. Deploying is as simple as making sure your package.json contains a start script. For example:
"scripts": {
"start": "node index.js"
}
Sample App
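If it helps, the deploy itself is then just a git push once the app exists on the dokku host (server and app names below are placeholders, and the branch depends on your setup):

# one-time, on the dokku host
dokku apps:create myapp
# locally: add the dokku remote and push to deploy
git remote add dokku dokku@my-server.example.com:myapp
git push dokku master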

Related

Deploying Next.js to Apache server

I've been developing a Next.js website locally and now want to set it up on my Apache server (with cPanel). However, I'm very new to Next.js and Node apps and not too sure how to go about it.
Has anyone done this successfully? Can you list the required steps and what files should be on the server?
Also, can this be done on a subdomain?
Thank you!
To start with some clear terms just so we're on the same page, there are two or three very different things people mean when they say "server":
A Server Machine is a computer that is connected to the internet that you intend to use to serve something to people on the internet.
A Server Program is some software you run on your Server Machine. The job of the Server Program is to actually calculate the responses to various requests.
A Server as a Service is a webapp provided by a company that stores your code and then puts it onto Server Machines with the right Server Program as needed.
While we're here, let's also define:
A Programming Language is the language your website is written in. Some sites have no language (and are just raw HTML/CSS files that are meant to be returned directly to the user). Many sites, though, have some code that should be run on the server and then the result of that code should be returned to the user.
In your case, you have a Machine whose condition we don't know other than that it is running the Program Apache (or probably "Apache HTTP Server"). Apache HTTP server is very old and proven and pretty good at serving raw files back to users. It can also run some Programming Languages like PHP and return the result.
However, Next.JS is built on top of the Programming Language Javascript, which Apache does not have the ability to run. Next.JS instead wants its Server Program to be Node.
So the problem here is basically that you have a hammer, but only screws. You can't use the tool you have, Apache, to solve the problem you need solved, running Node code and returning the result. To get around this you have two options:
First, you can find a way to access the Server Machine that is currently running Apache and tell it, instead, to run Node pointed at your Next.JS code whenever it starts up. This might not be possible, depending on who owns this machine and how they've set it up.
Second, and probably easier, is to abandon this Machine and instead use a Server as a Service. Heroku, AWS, and Netlify all support Next.JS and have a free tier. The easiest solution, though, is probably to just deploy it on Vercel, which is a Server as a Service run by the same team that makes Next.JS and which has a very generous free tier for you to get started with.
The good news, though, is that yes next.js does totally support being hosted from a subdomain.
Next.JS allows you to build fully functional Node applications, as well as simple statically generated sites like Jekyll or DocPad. If your use case is a simple statically generated site, look here: https://nextjs.org/docs/advanced-features/static-html-export
In particular, the next build && next export command will create all the HTML and assets necessary to host a site directly via an HTTP server like Apache or Nginx. Contents will be output to an out directory that can serve as the server root.
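As a rough sketch of that static-export route (the cPanel path is just an example):

# from the project root
next build && next export
# the static site is written to ./out; upload that folder's contents
# to the (sub)domain's document root, e.g. public_html/ in cPanel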
Pay very close attention to what features are not supported via this approach.

Use Google App Engine or Google Cloud Compute VM to Test Run My App?

I'm moving my Three.js app and its customized node.js environment, which I've been running on my local machine to Google Cloud. I want to test things out there, and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go... to upload the repo I've been running locally as-is onto a VM which users would then access via the VM's external IP until I get a good name to call this app... or merge my local node.js environment with what's available via the Google App Engine and run it on GAE.
Issues I'm running into with the linux VM approach... I'm not sure how to do the equivalent on the VM of what I've been doing locally. In Windows Powershell I cd into the app directory and then enter node index.js. I'm assuming by this method of deployment that I can get the app running as soon as the browser hits the external IP. I should mention too that the app will allow users to save content as well as upload images, and eventually, 3D models as well as json datasets.
Issues I'm running into with the App Engine approach: it looks like I only have access to a linux-based command line, and have to install all the node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side node files and all the frontend stuff. I don't see where to upload those files, and ultimately what I'd like to do is have access to a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively I suppose I could import a repo from Github? Github would be fine as long as I can visually see what's happening. Is there a visual interface for file structure available in GAE somewhere? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine what's the better approach, and then if possible, determine how to make the experience of getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
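For example, getting the files up and the app running on a GCE VM could look roughly like this (VM name, zone and paths are placeholders):

# from your local machine
gcloud compute scp --recurse ./my-app my-vm:~/my-app --zone us-central1-a
gcloud compute ssh my-vm --zone us-central1-a
# then, on the VM
cd ~/my-app && npm install
nohup node index.js &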
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on AppEngine.
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so you can store your files, etc. in it, and then probably rewrite your app somewhat to use it to store stuff.
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.

running nodejs app inside go

I have a requirement. Is there a way to run nodejs apps inside golang? I need to wrap the nodejs app inside a golang application and end up with a golang binary that starts the nodejs server, so that I can then call the nodejs REST endpoints. I need to encapsulate the entire nodejs application in the golang binary, including node_modules and, if necessary, the nodejs runtime.
Well, you could make a Go program that includes e.g. a zipped Node application that it extracts and starts, but it will be very hard to do well - you will have huge binaries, delays while extracting files, potential portability problems, etc. Usually, when you want to call REST endpoints, you host your Node app on some server and let the client app (the Go app in your example) connect to that Node app. The advantages are that it is much faster, the app is much smaller, you don't have portability issues with Node binaries and addons, and you can quickly update your backend any time you want.
It would be a very bad idea to embed a nodejs app into your golang binary, for various reasons such as size, pushing security updates, etc.
However, if you so strong feel that they should be together, you could easily create a docker container with these two (a golang server + a node app) and launch them via docker. You can set the entrypoint to a supervisord daemon so that your node server as well as the golang server can be brought up when your container is run.
If you are planning to deploy via kubernetes, you can create two individual docker containers (one for the golang server, one for the node server) but always deploy them together as a pod.
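A rough sketch of the single-container idea, assuming a prebuilt Go binary and a conventional supervisord layout (all names and paths are placeholders):

# Dockerfile
FROM node:18
RUN apt-get update && apt-get install -y supervisor
WORKDIR /srv
COPY node-app/ ./node-app/
COPY go-server ./go-server
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
RUN cd node-app && npm install --production
CMD ["/usr/bin/supervisord", "-n"]

# supervisord.conf -- one [program] block per process
[program:node-app]
command=node /srv/node-app/index.js
[program:go-server]
command=/srv/go-server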
There are multiple projects to embed binary files and/or file system data into your Go application.
Look at the 'Alternatives' section of the 'vfsgen' project:
https://github.com/shurcooL/vfsgen#alternatives

Where to run node.js

So I thought about giving Node.js a try, seeing the possibilities it has for a little test chat project (with MySQL) I'm doing.
But what I couldn't find out is where to run the file from and what's most common.
What I currently have:
A FreeBSD server with latest Node and PHP 5.3.x
A vhost
some tutorials on how to start with node (which I looked through and got excited about)
knowledge on how to run it from terminal without having to keep my terminal open (screen)
So far so good.
What I need:
Some basic information on where to put the (let's say) chat.js file.
The most logical port to run it on
So the web root (www) runs under a user (not root, obviously). And the webroot has an underlying folder where I could put the script (away from visitors' grabby little hands). This seems to me to be the safest place to put it, since nobody can get to it, which is probably what I want, since I'm going to connect to a DB and don't want my DB login data out there (I don't know how this works yet, but I'll figure out DB connections with Node later, no answer required).
But if a file is not in the webroot, it seems to me a connection cannot be made from outside, because my webroot is configured to only allow incoming traffic on 80 (or SSL on 443), which is logical. Outgoing obviously has no problems.
All the examples that I found don't really help me. They just do everything on a local machine, which sucks for me because I don't want to do that.
So what I would like to know is the best practice for:
Where to put the file
Which port to run it on.
What is Node.js?
A lot of the confusion for newcomers to Node is misunderstanding exactly what it is. The description on nodejs.org definitely doesn't help.
An important thing to realize is that Node is not a webserver. By itself it doesn't do anything. It doesn't work like Apache. There is no config file where you point it to your HTML files. If you want it to be an HTTP server, you have to write an HTTP server (with the help of its built-in libraries). Node.js is just another way to execute code on your computer. It is simply a JavaScript runtime.
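For example, the classic minimal HTTP server in Node is only a few lines of JavaScript:

var http = require('http');

// respond to every request with plain text
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(3000);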
A nice tutorial: How to Deploy Node JS Applications, With Examples.
You'll need to have your non-node application on port 9000 (for Apache, this will be in /etc/apache2/ports.conf and in your sites-available file for your site), and you'll need your node application to listen on 8080. You'll also need to set up DNS 'A' records for the different hostnames you'll be using for your servers.
Companies like Heroku allow for automated deployment of apps from the desktop to the cloud.
Nodejitsu provides a tool called jitsu that makes deploying a Node.js application super simple. You can install jitsu with npm.
npm install jitsu -g
Heroku How To
Getting started with jitsu
Use monit, forever, upstart or systemd to start your node server. Put Varnish, HAProxy or Nginx in front of it (note that older Nginx releases could not proxy WebSockets).
Ultimately you can stick it anywhere you want. I recommend running your application using Forever or similar instead of directly with Node. I usually keep my apps in /var/ and let each one run under a unique user. I do not recommend keeping them in your http root, as your .js files should NOT be interpreted by Apache, PHP, etc.
To be clear - you do NOT need a traditional webserver, nor do you need PHP, MySQL or anything else. Node is all you need. It'll serve content directly - it IS the webserver.
Oftentimes you'll have each app use a high port number (3000+) and use Nginx to proxy them all to different hostnames off of port 80 (allowing you to easily have multiple apps on a single machine). If you aren't using some sort of proxy, then 3000 is the usual default, but there is no correct or incorrect port, so long as you don't use a reserved port.
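A typical Nginx server block for that kind of proxying looks roughly like this (hostname and port are placeholders):

server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        # the next three lines are only needed if the app uses websockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}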

Is 'pserve' inappropriate to serve my Pyramid app for production use behind nginx?

I get the impression, though it's not explicitly stated anywhere, that using pserve on my Pyramid app when it's deployed to production is not the best idea. I don't know that it deals with concurrency, for example -- and I suspect it doesn't at all. I don't know if paster is right, either.
For context: I have a Pyramid app with a PasteDeploy configuration file, which I can serve up using a command like pserve config.ini. So, in production, would I just run that command as a daemon and reverse-proxy it through nginx?
What's the best practice here?
pserve is just an application loader and server runner. It's capable of launching many different WSGI servers (one of which you need to select for deployment). There are few WSGI servers that cannot be run via pserve (the main one coming to mind is Apache's mod_wsgi).
As far as production, the main thing you want is reliability, which supervisor can greatly help with. You'll want to look at the nginx deployment recipe, but the cookbook actually has recipes for several different deployment scenarios which you will need to evaluate based on your current infrastructure.
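A minimal supervisor program entry for that might look like this (paths and names are assumptions):

[program:myapp]
command=/path/to/venv/bin/pserve /path/to/production.ini
directory=/path/to/app
autostart=true
autorestart=true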
