I wonder if my way of deploying a Node.js application is good or can be improved, and whether there are any best practices.
Consider that it's a large application and may contain unhandled exceptions that will make the Node server crash (even with unit tests, we're not sure it's 100% crash-safe). I use forever to keep the server always running (I could also use pm2, but the purpose is the same). So I built a systemd script to run this as a service, like "service nodeapp start|stop|status".
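For illustration, a minimal sketch of what such a unit can look like (the paths, user, and app name here are placeholders, not my actual setup):

[Unit]
Description=nodeapp
After=network.target

[Service]
# run forever in the foreground so systemd itself can supervise it
ExecStart=/usr/local/bin/forever /opt/nodeapp/app.js
User=nodeapp
Environment=NODE_ENV=production
Restart=on-failure

[Install]
WantedBy=multi-user.target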
I think the best solution would be to run Node "raw" without forever or pm2, still using systemd, but I think the risk of crashes is too high.
The server is behind an nginx proxy, and I also added a logrotate script for log maintenance.
Any advice and suggestions will be greatly appreciated
Thanks
There are plenty of checklists both on the internet and here on Stack Overflow.
My personal checklist is:
set NODE_ENV to staging
run all the unit tests
deploy on a staging env
run all the scripts on the server
if the tests pass, change NODE_ENV to production and deploy (rough shell sketch below)
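A rough shell sketch of that checklist (deploy.sh and the host name are hypothetical stand-ins for whatever your project uses):

# staging pass
export NODE_ENV=staging
npm test                    # run all the unit tests
./deploy.sh staging         # deploy on a staging env
ssh staging-host 'cd /srv/app && npm test'    # run all the scripts on the server

# if everything passes, promote to production
export NODE_ENV=production
./deploy.sh production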
If you want to skip some tests and be sure that the environment is the same in dev and prod, there is a fairly new thing called Docker, but at first it will add some complexity.
Hopefully, once you have everything set up with Docker, it should not be too intrusive.
P.S. Use forever; if you're using Docker you could restart the container instead, but that's not as good.
Related
I have been using foreverjs for my server, but for some reason the server stopped and didn't restart. Is foreverjs reliable?
Should I use any other libs?
I found there are many libs like pm2, nodemon, upstart, systemd, and nginx. Which one will ensure my application keeps running all the time? Also, can these tools handle large loads of requests?
There are multiple questions within your question to analyze.
Is foreverjs reliable?
forever is a very popular package. As seen on GitHub, it has 75 contributors and 636 commits. This question is mainly opinion-based, but 9/10 (maybe 10/10) experienced developers would say it's reliable for its purpose (I expand below).
Should i use any other libs?
Reliability is achieved through sturdy software design, not just the packages you choose. I have used forever and pm2 for production processes for years without any problems on their end. They include great features for reliability, such as attempting to restart your application if it crashes. Packages are not supposed to fix fatal bugs in your code.
I found there are many libs like pm2, nodemon, upstart, systemd, and nginx.
Which one will ensure my application keeps running all the time?
This can be found through reading their GitHub descriptions. I use nodemon for quickly testing code as it's written. For example, I start the nodemon process and it begins my Node.js process. When I edit my code and press save, the Node.js process is automatically stopped and restarted with the new code. nodemon should not be used alone for a long-running production server, since it will stop when you exit your shell. pm2 and forever are effective libraries and you can investigate upstart, systemd, and nginx if necessary.
Regarding @Kalana Demel's answer, I consider using forever to run nodemon to be the same as using forever in my explanation above.
How to ensure my application is reliable all the time
For an overall answer to your question, you should be writing tests to ensure your code is reliable. If you've written effective unit and integration tests, choosing a package to run the process will be trivial (and not related to reliability), since you should not expect it to crash.
nodemon is a good option; you can use a combination of forever and nodemon:
forever start -c nodemon app.js
Also, in my experience forever is very reliable; try
forever logs app.js
to see what exactly caused the error
pm2 is a good option in these cases; I personally use pm2 on all my Node.js servers, and it provides a lot more important functionality than the others.
One of the best things about it is that it can be easily integrated with Keymetrics or New Relic for server analytics.
pm2 will also give you CPU/memory usage, and you can even configure the restart limit and interval.
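As a minimal sketch, a pm2 ecosystem file along these lines configures the restart limit and interval (the app name and script path are hypothetical):

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'myapp',          // hypothetical app name
    script: './index.js',   // hypothetical entry point
    max_restarts: 10,       // give up after 10 rapid crashes
    min_uptime: 5000,       // a run shorter than 5s counts as a rapid crash
    restart_delay: 1000     // wait 1s between restarts
  }]
};

Start it with pm2 start ecosystem.config.js, then pm2 monit shows the CPU/memory usage mentioned above.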
I am running a Node.js + AngularJS application on a cloud server using the MEAN stack. The application terminates every hour or sooner.
I have a few thoughts and would like someone to tell me which might be the cause.
I SSH in as root and start the service using this command:
NODE_ENV=production PORT=80 grunt serve:dist
Do I need forever to run this properly?
Should I use a server user (similar to apache) to run the application?
If yes, how do I do this?
We do not have a deployment engineer on our team, but it is annoying not to be able to keep the app running on the server after developing the application. Please help diagnose the problem.
If you don't want to use a deployment service (MS Azure, AWS, Heroku, etc., which would probably be a lot easier), then yes, you would have to use something like forever to restart your server every time it crashes. It's really odd that your app terminates after an hour, though; it would be helpful if you could describe why that's happening.
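As a minimal sketch, assuming the built app's entry point is something like server.js (hypothetical, since grunt serve:dist wraps the real process), running it under forever would look like:

NODE_ENV=production PORT=80 forever start -l /var/log/myapp/forever.log -o out.log -e err.log server.js

The -l, -o, and -e flags capture forever's log, stdout, and stderr, which should also help you find out why the app terminates every hour.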
I'm looking for a Docker-based project setup which enables:
Development environment to most closely match production
Best of breed workflow automation tools for all developers
Highly portable/quick to set-up development environment, which supports Linux, OSX, and Windows. Currently we use Vagrant and that seems to be the most obvious choice still.
To satisfy #1:
Same app container (node.js + Apache) for dev, test, staging and production
Do not add any custom workflow tools to the container just for development's sake
To satisfy #3:
Do not require developers to all install their own dev tools for their respective environments/OSes (e.g. getting them to install node.js, npm, grunt, etc within the host)
So then to still satisfy #2, the idea I have is:
have a second "dev" container which shares files with the node/apache container and runs all the workflow automation.
run all the grunt watch/rebuild/reload/browser-sync etc from within that.
If using Vagrant, the file sharing would essentially go as host->dev container->app container
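For illustration, the two-container idea could look roughly like this docker-compose file (the image name, paths, and command are hypothetical):

version: '2'
services:
  app:
    build: .                    # the node.js + Apache app container
    volumes:
      - ./src:/var/www/app
  dev:
    image: my-dev-tools         # hypothetical image with node, npm, grunt preinstalled
    volumes:
      - ./src:/var/www/app      # same files as the app container
    command: grunt watch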
Are there any flaws in the above model, or perhaps better ideas?
One potentially missing point is whether, and if so how, to avoid performing a full build of containers in production each time. Without risking a mismatch between production and the other containers, I'd like to "package up" the container so that when new code is pushed to production, the app server only needs to restart instead of running npm install, etc. In particular, once we're pushing to production, it should no longer have to pull anything from third-party servers in order to run.
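For example, I imagine baking the dependencies into the image at build time, roughly like this Dockerfile (the base image tag and entry point are placeholders):

FROM node:18
WORKDIR /app
# install dependencies at build time, not at deploy time
COPY package*.json ./
RUN npm install --production
COPY . .
ENV NODE_ENV=production
CMD ["node", "server.js"]

Production would then only pull the prebuilt image from our registry and restart the container, with no npm install and no third-party downloads.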
This is a bit of a broad question where answers will be opinionated rather than backed by objective arguments, but here's what I would change:
Node.js is fine, but I would choose nginx instead of Apache. Both Node.js and nginx are event-based and allow much higher throughput, which is one of the advantages of Node.js. This might vary, e.g. if you need certain Apache-only modules, but nginx seems more natural to put in front of Node.
Why do you want to have a separate container? To minimize the production container by not having it carry dev tools?
I really think that having, say, Grunt in the production container is not too heavy, but again, you seem to be trying to minimize the impact. Alternatively, you can have both the code and the grunt watch etc. inside one container and deploy like that. The pro is that you simplify the setup; the con is that your production build might install a few extra libs. You can mitigate that by, for example, setting NODE_ENV to production when deploying the production container, so that on startup your scripts know not to load certain dev tools.
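A minimal sketch of that startup check, where livereload stands in for any dev-only dependency:

// only load dev tooling outside production
if (process.env.NODE_ENV !== 'production') {
  var livereload = require('livereload');
  var lrServer = livereload.createServer();
  lrServer.watch(__dirname + '/public');
}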
I get the impression, though it's not explicitly stated anywhere, that using pserve for my Pyramid app when it's deployed to production is not the best idea. I don't know that it deals with concurrency, for example, and I suspect it doesn't at all. I don't know if paster is right, either.
For context: I have a Pyramid app with a PasteDeploy configuration file, which I can serve up using a command like pserve config.ini. So, in production, would I just run that command as a daemon and reverse-proxy it through nginx?
What's the best practice here?
pserve is just an application loader and server runner. It's capable of launching many different WSGI servers (one of which you need to select for deployment). There are few WSGI servers that cannot be run via pserve (the main one coming to mind is Apache's mod_wsgi).
As far as production goes, the main thing you want is reliability, which supervisor can greatly help with. You'll want to look at the nginx deployment recipe, but the cookbook actually has recipes for several different deployment scenarios, which you will need to evaluate based on your current infrastructure.
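As a minimal sketch, a supervisor program entry for pserve might look like this (paths and user are placeholders):

[program:pyramid_app]
command=/path/to/venv/bin/pserve /path/to/production.ini
directory=/path/to/app
user=www-data
autostart=true
autorestart=true
redirect_stderr=true

nginx then reverse-proxies to whatever host/port your production.ini tells the chosen WSGI server to bind.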
Is there a good way to run and manage multiple nodejs apps on a single server?
I've been looking at haibu and nodester, but they seem a little complex for what I am trying to do.
I was also looking at forever, and I think that may work with the config file and web GUI, but I am not sure how I am going to handle passing the port information via ENV or arguments.
I use Supervisord and Monit; more details and a configuration example are here: Process Management at Bringr.
Moreover, you can specify environment variables directly in the supervisord configuration file (see sub-process environment), but I personally prefer to add these variables directly to ~/.bashrc on each machine.
If the port number isn't going to change for each application (but does change between the production and development environments), I'd recommend specifying it inside a config.json (or directly inside package.json). The config.json will then contain a different port number for each application depending on the environment:
{
  "myapp": {
    "production": { "port": 8080 },
    "development": { "port": 3000 }
  }
}
And inside myapp.js:
var express = require('express');
var app = express();
var config = require('./config'); // Node resolves config.json here
app.listen(config.myapp[process.env.NODE_ENV].port);
With process.env.NODE_ENV declared in ~/.bashrc.
I wrote an app, nodegod, that I use for a handful of deployments of maybe 10 apps each.
nodegod reads an app list from JSON. It has an internal state machine for each app that handles the app lifecycle in a safe manner, including restarts, and the web page features stop/start/debug.
The web interface uses web sockets, so that you can manage remote servers over ssh.
As you deploy over rsync, apps restart automatically.
Because nodegod monitors the stdout of other apps, you can capture an app's final breath, such as segfault and malloc errors.
I use a fork of http-proxy in front of a slew of express instances, so any number of apps can share a single server port per DNS name, for both HTTP and web sockets.
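Roughly, that pattern looks like this with plain http-proxy (the hostnames and ports here are just examples, not my actual setup):

var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

// map each DNS name to a local express instance
var targets = {
  'app1.example.com': 'http://127.0.0.1:3001',
  'app2.example.com': 'http://127.0.0.1:3002'
};

var server = http.createServer(function (req, res) {
  var target = targets[req.headers.host];
  if (target) proxy.web(req, res, { target: target });
  else { res.writeHead(502); res.end(); }
});

// forward web socket upgrades too
server.on('upgrade', function (req, socket, head) {
  var target = targets[req.headers.host];
  if (target) proxy.ws(req, socket, head, { target: target });
});

server.listen(80);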
I wrote a haraldops module to read app configuration from outside the source tree. With that you can monitor and get emails whenever something's up with an app.
App configurations I keep in a git repo in the file system.
It's not rocket science, and it all fits together very nicely. Only Node and JSON: SIMPLE gets more done.
If your server has upstart, just use it. I've had no luck with forever and similar.
If you want to proceed with upstart, roco would be a nice deployment solution:
roco deploy:setup:upstart
roco deploy
We're constantly trying to improve forever and haibu at Nodejitsu. It seems like the approach you're looking for here is a .forever configuration file for complex options. This feature has been on our backlog for a while now:
https://github.com/nodejitsu/forever/issues/124
Check back. I consider it pretty high priority after the next round of performance improvements.
These days I've taken to using dokku, which is an OSS clone of Heroku. Deploying is as simple as making sure your package.json contains a start script. For example:
"scripts": {
"start": "node index.js"
}
Sample App