I'm trying to deploy my MEANjs application into production...
So far I've used Jenkins, Git, rsync, etc. to copy the project to the remote server,
and in the final step I just have to call:
stop myMeanjsApp
Replace the folder with the new version of the application
Call start myMeanjsApp
But that would mean downtime, which I'm trying to avoid. So:
1. How can I avoid this?
2. Are there any good-practice workflows for this?
I've seen this but I'm not sure if it's the way to go, or is there any other simple way of doing this?
Typically a large-scale web application is upgraded by creating new virtual machines running the upgraded version of the software. The new virtual machines are then added to the load balancer (manually or automatically). Then the virtual machines running the old version are removed from the load balancer pool, and when all in-progress requests to the old VMs are done, the VMs can be destroyed. E.g. AWS features like ELB and auto scaling groups make this an attractive way to upgrade software.
You could do the same even if you have a single server by starting the new version on a different port.
The naught npm module is a fair approach if you must replace the code in place.
For some applications it might be an option to simply stop accepting new connections and restart with the new version when the last connection is done.
And for some applications you can just kill the old version and start the new version at any time. It all depends on your requirements and environment.
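For the "stop accepting new connections" option, Node's HTTP server already provides the building block: server.close() stops accepting new connections and fires its callback once the last existing connection has finished. A minimal sketch, assuming an Express-style app module and a placeholder port:

    // graceful-stop sketch: drain the server before exiting
    const http = require('http');
    const app = require('./app'); // placeholder: your Express/MEAN request handler

    const server = http.createServer(app).listen(process.env.PORT || 3000);

    process.on('SIGTERM', () => {
      // stop accepting new connections; in-flight requests are allowed to finish
      server.close(() => process.exit(0));
      // safety net: force the exit if connections linger too long
      setTimeout(() => process.exit(1), 30000).unref();
    });

Your process manager (or deployment script) sends SIGTERM to the old version, waits for it to exit, and then starts the new one.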
Related
Our app is in production now.
How can I update the server code?
Issue: there is a chance that someone is accessing the server or doing some CRUD operations at that moment.
That could impact the data, and we have some payment-related things as well.
A reasonably easy and straightforward way is a blue-green deployment. (Unlike what the Wikipedia article says, the instances don't need to be physical servers; they can just be app instances listening on different ports on the same server, for instance.)
Assuming you'd be upgrading from version 1 to version 2:
Your frontend load balancer (e.g. ELB) directs traffic to version 1.
You deploy version 2.
You configure ELB to start directing traffic to version 2 and stop directing to version 1.
Once there is no traffic to version 1, you shut it down.
Extra care must of course be taken if version 2 involves e.g. database schema changes that aren't compatible with version 1.
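If both versions live on one box behind nginx rather than behind an ELB, the switch can be a one-line change in the upstream block (ports and names below are examples, not taken from the question):

    upstream app {
        server 127.0.0.1:3001;   # version 2; this pointed at 127.0.0.1:3000 while version 1 was live
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }

An nginx -s reload applies the change gracefully: old worker processes finish their in-flight requests before exiting.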
We currently have TDWC (8.5.1) stood up on a Linux server. (A very OLD Linux server that doesn't have much horsepower.) It's working fine, but slow. We need to upgrade it, and 9.2 is as high as we can go due to our Service Provider limitations.
Instead of upgrading it in place, I would like to install it on a brand new Windows 2012-R2 server that was provisioned just for Workload Automation tools. I've scoured the manuals and the forums and I don't see anything that addresses this specifically. I assume this install would be handled as a brand new install and not an upgrade as far as the server goes.
My question/concern is about the Started Task and Parmlib on the Mainframe. As long as I am using the same host & port on the mainframe, and the z/OS Connector, wouldn't it be as simple as shutting down the old TDWC and starting up the new 9.2 DWC release? Wouldn't it connect to the same Started Task as the current release does?
The SERPTDWC member on the mainframe contains the following...

/* TCPIP ZCONNECTOR SERVER
SERVOPTS SUBSYS(TWSC)
         PROTOCOL(TCP)
         USERMAP(USERS)
TCPOPTS  TCPIPJOBNAME(NETITCP)
         HOSTNAME(DPSMVS1.EDS.EXPRESS-SCRIPTS.COM)
         SRVPORTNUMBER(31121)
INIT     CALENDAR(DEFAULT)
There is no problem in running multiple connectors connected to the same server started task; this is the standard configuration when running a DWC cluster.
This does not require any change to SERVOPTS or TCPOPTS. The only check to do is to verify that the users authenticating on the new connector are correctly mapped in the USERMAP: the new connector will present the users with the new connector name, and you may need to add them to the USERS parameter member.
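Purely as an illustration (the names below are invented and the exact statement syntax depends on your release, so copy the pattern already present in your USERS member), a mapping entry has roughly this shape:

    USER 'DWCUSER@NEWCONN' RACFUSER(TSOUSER1) RACFGROUP(TWSGRP)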
There are several packages that can update a Node.js app in production mode with zero downtime (or graceful reload), for example pm2.
However, is it possible to update Node.js itself, for example from the LTS version 4.x to the new version 6.x, in production with zero downtime?
Updating production with zero downtime can be done with whatever you want to update, as long as it has redundancy. We do it every day at work.
Upgrading Node is routine as long as installing Node is part of your deployment process, using for example nvm.
You need several servers, of course.
Prerequisite: suppose your code is at version 1.0. Upgrade Node.js in dev, test your code, persist the required Node version (package.json, .nvmrc or whatever your install script needs) and bump it to 1.1. Also check that your server OS meets the Node 6 requirements (for example, CentOS 6 can't install Node 4).
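A minimal way to persist that version, assuming you use nvm together with npm's engines field (the version numbers are examples): an .nvmrc containing just the version string, which a plain nvm install / nvm use will pick up,

    6.9.1

and a matching fragment in package.json:

    {
      "version": "1.1.0",
      "engines": { "node": ">=6.9.1" }
    }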
Generic zero-downtime rolling-deploy procedure, supposing you have 4 servers in a farm:
1. Remove server 1 from the farm. If you use persistent connections (websockets), signal existing clients of this server to reconnect (to servers 2, 3 and 4).
2. Deploy version 1.1 to server 1. This should include the reinstallation of Node (e.g. nvm install). Connect directly to it and check that everything is OK.
3. Do the same with server 2 (remove from farm, signal clients, deploy the new version) so we don't have a single point of failure. Your app is still being served by servers 3 and 4.
4. Put servers 1 and 2 back in the farm and remove 3 and 4. Signal the clients of 3 and 4 to reconnect if needed.
5. Upgrade servers 3 and 4 and put them back in the farm.
Done. All the servers are upgraded and the clients didn't notice anything, provided your app is well coded.
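A rough shell sketch of what happens on each server in turn; the load-balancer commands, paths, process-manager call and health-check URL are placeholders for whatever your setup actually uses:

    remove-from-lb app1                    # placeholder: drain this server in HAProxy/ELB
    nvm install                            # installs the Node version pinned in .nvmrc
    cd /srv/myapp && git checkout v1.1 && npm install --production
    pm2 reload all                         # or however you restart your app processes
    curl -f http://localhost:3000/health   # connect directly and check everything is OK
    add-to-lb app1                         # put the server back in the farm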
To do zero downtime you need to make sure someone is handling user requests all the time, so at the exact moment you kill the old version's process, the new one has to already be bound to the same port with the same app.
So the most important part of zero downtime in this case is load balancing. For that, you should have HAProxy, nginx, or similar installed in front of your app with multiple Node processes, so that you can kill and replace each one separately.
In general, running one Node process for all of your production is not a good idea, so if you don't have any load balancer, do install one even at the cost of some downtime. Then put at least two separate servers (or containers) behind it and run your app with pm2 on them.
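For example, with pm2 in cluster mode the workers themselves give you redundancy on a single box, and a reload restarts them one at a time (server.js is a placeholder for your entry point):

    pm2 start server.js -i max   # cluster mode: one worker per CPU core
    pm2 reload server            # zero-downtime reload: workers are replaced one by one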
Also, notice that even with redundancy your app will interrupt clients if you just restart one node, e.g. if someone is uploading a file while this happens.
HAProxy has a feature where you can disable one of the backend servers but wait until all the HTTP connections to it finish naturally, while HAProxy sends no new traffic to it. That's how you make sure no user is interrupted.
Example configuration here: https://blog.deimos.fr/2016/01/21/zero-downtime-upgrade-with-ansible-and-haproxy/
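A minimal HAProxy sketch of that setup, with ports and names made up for illustration; the runtime API (available when an admin-level stats socket is configured) lets you drain one backend server, wait for its sessions to end, deploy, then re-enable it:

    frontend www
        bind *:80
        default_backend nodes

    backend nodes
        balance roundrobin
        server app1 127.0.0.1:3001 check
        server app2 127.0.0.1:3002 check

    # drain app1 before deploying to it, then set it back to "ready" afterwards
    echo "set server nodes/app1 state drain" | socat stdio /var/run/haproxy.sock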
Apart from the rolling update mentioned here, there is another simple and obvious technique to update the app with zero downtime, called blue/green deployment.
You just spin up an entire copy of your existing infrastructure, let's say blue, and perform the required updates there. When the new blue infra is ready and tested, switch traffic to it and make the old green infra idle.
This way, if something goes wrong, you always have the safe option of rolling back to the old infra.
When you're sure that the new blue version works OK under load, just remove the old green version.
I would suggest this way for complex operations, instead of touching live production.
More: https://cloudnative.io/docs/blue-green-deployment/
I'm looking for ways in which to deploy some web services into production in a consistent and timely manner.
I'm currently implementing a deployment pipeline that will end with a manual deployment action of a specific version of the software to a number of virtual machines provisioned by Ansible. The idea is to provision x number of instances using version A whilst already having y number of instances running version B, then image them and flip the traffic over. The same mechanism should allow me to scale new VMs in a set using the image I already made.
I have considered the following options, but was wondering if there's something I'm overlooking:
TGZ
The CI environment would build a tarball from a project that has passed unit tests and integration tests. Optionally, dependencies would be bundled (removing the need to run npm install on the production machine and to rely on network connectivity to a public or private npm repository).
My main issue here is that any dependencies that depend on system libraries would be built on a different machine (albeit the same image). I don't like this.
NPM
The CI environment would publish to a private npm repository and the Ansible deployment script would check out a specific version after provisioning. Again, this suffers from a reliance on external services being available when you want to deploy. I don't like this.
Git
Any system-dependent modules become globally installed as part of provisioning and all other dependencies are checked into the repository. This gives me the flexibility of being able to do differential deployments, whereby just the deltas are pushed and the application daemon can be restarted automatically by the process manager almost instantly. Dependencies are then absolutely locked down.
This would mean that there's no need to spin up a new VM except to scale. Deployments can be pushed straight to all active instances.
First and foremost, regardless of the deployment method, you need to make sure you don't drop requests while deploying new code. One simple approach is removing the node from a load balancer prior to switchover. Before doing so, you may also want to try and evaluate if there are pending requests, open connections, or anything else negatively impacted by premature termination. Or perhaps something like the up module.
Most people would not recommend source-controlling your modules. It seems that a .tgz with your node_modules already filled in from an npm install, while utilizing a bundledDependencies declaration in your package.json, might cover all your concerns. With this approach, an npm install on your nodes will not download and install everything again. It will, however, rebuild node-gyp implementations, which may cover your system-library concern.
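For reference, a hypothetical package.json fragment using that declaration (the package names are just examples); npm pack then includes the listed modules in the generated .tgz:

    {
      "name": "my-service",
      "version": "1.2.0",
      "dependencies": {
        "express": "^4.13.0",
        "lodash": "^3.10.0"
      },
      "bundledDependencies": ["express", "lodash"]
    }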
You can also make use of git tags to more easily keep track of versions with specific dependencies and payloads. Manually deploying the code may get tedious; you may want to consider automating the routine while iterating over x amount of known server entries in a database from an interface. docker.io may be of interest.
There are a bunch of managed cloud-based hosting services for Node.js out there which seem relatively new, and some are still in beta.
Yet another path to host a Node.js app is setting up a stack on a VPS like Linode.
I'm wondering what the basic difference is between these two kinds of deployment.
Which factors should one consider in choosing one over the other?
Which one is more suitable for production, considering how young these services are?
To be clear, I'm not asking how to choose a provider, but whether to host on managed Node.js-specific hosting or on an old-fashioned self-setup VPS.
Using one of the services is for the most part hands-off: you write your code and let them worry about managing the box, keeping your process up, creating the publishing channel, patching the OS, etc...
In contrast having your own VM gives you more control but with more up front and ongoing time investment.
Another consideration is some hosters and cloud providers offer proprietary or distinct variations on technologies. They have reasons for them and they offer value but it does mean that if you want to switch cloud providers, it might mean you have to rewrite code, deployment scripts etc... On the other hand using VMs with standard OS as the baseline is pretty generic. If you automate/script/document the configuration of your VMs and your code stays generic, then your options stay open. If you do take a dependency on a proprietary cloud technology then it would be good to abstract it away behind an interface so it's a decoupled component and not sprinkled throughout your code.
I've done both. I did the VM path recently, mostly because I wanted the learning experience. I had to:
get the VM from the cloud provider
update and patch the OS
install and configure git as a publishing channel
write some scripts and use things like forever to keep it running
configure the reverse HTTP proxy to run multiple sites
configure DNS with the cloud provider, open ports for git, etc...
The list goes on. In the end, it cost me more up front time not coding but I learned about a lot more things. If those are important to you, then give it a shot. If you want to focus on writing your code, then a node hosting provider may be for you.
At the end of it, I also had more options. I wanted to add a second site: I added an entry to my reverse proxy, appended a line to my script to start up another app with forever, and voila, another site. More control. After that, I wanted to try out MongoDB: simple, I just installed it.
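That "second site" step can be as small as one more entry in a host-based router. A sketch using node-http-proxy, with hostnames and ports made up for illustration:

    // host-based routing to two local Node apps
    const http = require('http');
    const httpProxy = require('http-proxy');

    const proxy = httpProxy.createProxyServer({});
    const sites = {
      'site-one.example.com': 'http://127.0.0.1:3000',
      'site-two.example.com': 'http://127.0.0.1:3001'   // the second site
    };

    http.createServer((req, res) => {
      const target = sites[req.headers.host];
      if (!target) {
        res.statusCode = 404;
        return res.end('Unknown host');
      }
      proxy.web(req, res, { target: target });
    }).listen(80);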
Cost wise they're about the same but if you start hosting multiple sites with many other packages like databases etc..., then the VM can start getting cheaper.
Nodejitsu open sourced their tools which also makes it easier if you do your own.
If you do it yourself, here's some links that may help you:
Keeping the server up:
https://github.com/nodejitsu/forever/
http://blog.nodejitsu.com/keep-a-nodejs-server-up-with-forever
https://github.com/bryanmacfarlane/svchost
Upstart and Monit
generic auto start and restart through monitoring
http://howtonode.org/deploying-node-upstart-monit
Cluster Node
Runs one worker process per core (a minimal sketch follows after this list)
http://nodejs.org/docs/latest/api/cluster.html
Reverse Proxy
https://github.com/nodejitsu/node-http-proxy
https://github.com/nodejitsu/node-http-proxy/issues/232
http://blog.nodejitsu.com/http-proxy-middlewares
https://github.com/nodejitsu/node-http-proxy/issues/168#issuecomment-3289492
http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load/
Script the install
https://github.com/bryanmacfarlane/svcinstall
Exit Shell Script Based on Process Exit Code
Publish Site
Using git to publish to a website
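For the cluster entry above, a minimal sketch of running one worker per core (./app is a placeholder for your actual server module):

    const cluster = require('cluster');
    const os = require('os');

    if (cluster.isMaster) {
      os.cpus().forEach(() => cluster.fork());
      cluster.on('exit', () => cluster.fork());   // respawn to keep the worker count constant
    } else {
      require('./app');   // each worker runs the actual HTTP server
    }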
IMHO the biggest drawback of setting up your own stack is that you need to manage things like keeping Node.js running forever, starting it as a daemon, putting it behind a reverse proxy such as Nginx, and so on... the great thing about Node.js, making firing up a web server a one-liner, is one of its biggest drawbacks when it comes to production-ready systems.
Plus, you've got all the issues of managing and updating and securing your server yourself.
This is so much easier with the hosters: Usually it's a git push and that's it. Scaling? Easy. Replication? Easy. ...? Easy. All within a few clicks.
The drawback with the hosters is that you cannot adjust the environment. Okay, you can probably choose which version of Node.js and/or npm to run, but that's it. You have no control over what third-party software is installed. You've got no control over the OS. You've got no control over where the servers are located. And so on...
Of course, some hosters allow you access to some of these things, but there is rarely a hoster that supports all.
So, basically the question regarding Node.js is the same as with every other technology: it's a matter of pros and cons around individualism, pricing, scalability, reliability, knowledge, ...
I personally chose to go with a hoster: The time and effort I save easily outperform the disadvantages. Mind you: For me, personally.
This question needs to be answered individually.
Using Docker is another way to simplify the setup on a single Linux VPS. With Docker, both development and production setups are faster, more robust, and more secure.
The setup is faster and more robust because you will be deploying a ready Node.js image at once, without running any installation scripts. And it is more secure because internal dependencies, such as the database, can be hidden from the outside world completely and made accessible only from Docker's internal network. On top of that, Docker significantly simplifies the upgrade process for the underlying OS and the Node.js runtime.
There are two ways to set up a Node.js Docker environment. The first one: follow the instructions published here on how to dockerize your application and deploy it with Docker, alongside databases when needed. The guide gives the instructions for the development setup; the production setup will be similar.
Another way would be deploying the official Node.js Docker image and mounting the application code as a volume or a folder into the Node.js image. That allows you to update the Node.js image going forward without rebuilding and redeploying the application. Such an approach solves the long-standing problem of security patching of Docker images.
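A sketch of that second approach with the official image; the image tag, host path and port are examples:

    # run the app from the official Node.js image, mounting the code as a volume
    docker run -d --name myapp \
      -v /srv/myapp:/usr/src/app \
      -w /usr/src/app \
      -p 3000:3000 \
      node:6 node server.js

Upgrading the runtime later is then a docker pull of a newer image plus recreating the container; the application code on the host stays untouched.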
To help out with the setup of Docker on a single machine, you can use Abberit Admin Panel. It will set up the Node.js environment for you with the click of a button, including databases if you need them. The tool is free, and you can turn it off after you have completed the initial setup. On the other hand, if you later decide to reduce the maintenance tax of production, you can migrate to a managed service without any changes to the app.
Disclaimer: I am one of the founders of Abberit.