push updates to gcloud containers/pods - node.js

Went through the tutorial without any issues, but I'm confused about the best way to push updates to the application. The tutorial mentions a bit about kubectl rolling-update, but I'm not really following that. Any feedback on the exact steps to use after deploying the app?

You should use the kubectl rolling-update command.
For a bit of background, imagine that you have an application running in 10 pods. Now you have a new version of your application. You don't want to stop the current version and then start the new version, because you would have a period of time where you aren't serving any user traffic. And if there is an issue with the new version, that period may be quite long as you push the new version, detect the issue, remove the new version, and restart the old one. A rolling update replaces your pods one at a time with pods running the updated version of your application. This lets you gradually shift incoming requests to the new version without any downtime. It also lets you catch issues with your new version while it is serving only a fraction of incoming requests.
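As a concrete sketch (the replication controller name, project, and image tag here are made up), that might look like:

```shell
# Replace the pods behind the "my-frontend" replication controller one at a
# time with pods running the v2 image:
kubectl rolling-update my-frontend \
    --image=gcr.io/my-project/my-app:v2

# If something looks wrong mid-update, abort and roll back to the old state:
kubectl rolling-update my-frontend --rollback
```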

Related

What's the process of updating a NodeJS running in production?

I am working on a webapp that will be published to production and then updated on a regular basis as features and bug fixes come in.
I run it with node app.js, which loads the config, connects to the database, and starts the web server.
I wonder, what's the process of updating the app when I have the next version?
I suppose I have to kill the process and start it again after updating and deploying? Does that mean there will be some downtime?
Should I collect stats on the period of least use during the week/month and apply the update then? Or should I start the current version on another machine, redirect all requests to it, update the main one, and then switch back?
I think the second approach is better.
The first one won't prevent downtime; it will just ensure the downtime impacts the fewest users, while the second one creates no downtime at all.
Moreover, I think you should keep the old version running on the other machine for some time, in case you find out the new version must be reverted for whatever reason. In that case you just have to redirect traffic back to the old node, again without any downtime.
Also, if you're setting up a production environment, I would recommend that instead of just running your process with the "node" command, you use something like forever or pm2 to get automatic restarts and some other advanced features.
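A minimal sketch with pm2 (the process name is illustrative):

```shell
# Run the app under pm2 in cluster mode so two workers share the port:
pm2 start app.js --name my-webapp -i 2

# After deploying new code, reload workers one at a time with no downtime:
pm2 reload my-webapp
```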

Update nodejs in production with zero downtime

There are several packages that can update a Node.js app in production with zero downtime (or graceful reload), for example pm2.
However, is it possible to update Node.js itself, for example from the LTS version 4.x to the newer version 6.x, in production with zero downtime?
Updating production with zero downtime can be done for whatever you want to update, as long as you have redundancy. We do it every day at work.
Upgrading Node is routine as long as installing Node is part of your deployment process, using for example nvm.
You need several servers, of course.
Prerequisite: suppose your code is at version 1.0. Upgrade Node.js in dev, test your code, persist the required Node version (package.json, .nvmrc, or whatever your install script needs), and bump the app to 1.1. Double-check that your server OS meets the new Node version's requirements (for example, CentOS 6 can't install Node 4).
Generic zero-downtime rolling-deploy procedure, supposing you have 4 servers in a farm:
1. Remove server 1 from the farm. If you use persistent connections (websockets), signal that server's existing clients to reconnect (to servers 2, 3 and 4).
2. Deploy version 1.1 to that server. This should include reinstalling Node (e.g. nvm install). Connect to it directly and check that everything is OK.
3. Do the same with server 2 (remove from farm, signal clients, deploy the new version) so we don't have a single point of failure. Your app is still being served by servers 3 and 4.
4. Put servers 1 and 2 back in the farm and remove 3 and 4. Signal their clients to reconnect if needed.
5. Upgrade servers 3 and 4 and put them back in the farm.
Done. All the servers are upgraded and, if your app is well coded, the clients didn't notice anything.
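The per-server deploy step above might look something like this (the directory, fetch method, and pm2 process name are assumptions):

```shell
# On the server just removed from the farm:
cd /srv/my-app
git pull            # fetch version 1.1 (could equally be rsync, a tarball, etc.)
nvm install         # installs the Node version pinned in .nvmrc if missing
nvm use
npm rebuild         # recompile native addons against the new Node ABI
pm2 reload my-app   # restart the app on the new runtime
```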
To achieve zero downtime you need to make sure someone is handling user requests at all times, so at the exact moment you kill the old version's process, a new one has to already be bound to the same port with the same app.
So the most important part of zero downtime in this case is load balancing. For that, you should have HAProxy, nginx, or similar installed in front of your app, with multiple Node processes behind it, so that you can kill and replace each one separately.
In general, running a single Node process for all of production is not a good idea, so if you don't have a load balancer, do install one, even at the cost of some downtime. Then put at least two separate servers (or containers) behind it and run your app with pm2 on them.
Also, note that even with redundancy, your app will interrupt clients if you just restart one node, e.g. if someone is uploading a file when it happens.
HAProxy has a feature where you can disable one of the backends but wait for all of its HTTP connections to finish naturally, while HAProxy sends no new traffic to it. That's how you make sure no user is interrupted.
Example configuration here: https://blog.deimos.fr/2016/01/21/zero-downtime-upgrade-with-ansible-and-haproxy/
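A minimal sketch of that drain-and-reenable cycle via the HAProxy admin socket (the backend name, server name, and socket path are made up):

```shell
# Stop sending new traffic to app1; existing connections finish naturally:
echo "disable server nodes_backend/app1" | socat stdio /var/run/haproxy.sock

# ...wait for app1's sessions to drain, then deploy and restart the app...

# Put it back into rotation:
echo "enable server nodes_backend/app1" | socat stdio /var/run/haproxy.sock
```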
Apart from the rolling update mentioned here, there is another simple and obvious technique for updating an app with zero downtime, called blue/green deployment.
You spin up an entire copy of your existing infrastructure, say "blue", and perform the required updates there. When the new blue infra is ready and tested, you switch traffic to it and make the old "green" infra idle.
This way, if something goes wrong, you always have the safe option of rolling back to the old infra.
When you are sure the new blue version works OK under load, just remove the old green version.
I would suggest this approach for complex operations, instead of touching live production.
More: https://cloudnative.io/docs/blue-green-deployment/

Docker: Best way to handle security updates of packages from apt-get inside docker containers

On my current server I use unattended-upgrades to automatically handle security updates.
But I'm wondering what people would suggest for working inside Docker containers.
I have several Docker containers running, one for each service of my app.
Should I have the unattended-upgrades setup in each? Or maybe upgrade them locally and push the upgraded images up? Any other ideas?
Does anyone have experience with this in production?
I apply updates automatically, as you do. I currently have Stage containers and nothing in Prod yet, but there is no harm in applying updates to each container: some redundant network activity, perhaps, if you have multiple containers based on the same image, but it is harmless otherwise.
Rebuilding a container strikes me as unnecessarily time-consuming, and it involves a more complex process.
WRT time:
The time to rebuild is added to the time needed to update, so it is 'extra' time in that sense. And if your container has start-up processes, those have to be repeated.
WRT complexity:
On the one hand you are simply running updates with apt. On the other you are basically acting as an integration server: the more steps, the more that can go wrong.
Also, the updates do not need to produce a new 'golden image', since the update process is easily repeatable.
And finally, since the kernel is never actually updated, you never need to restart the container.
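For reference, this in-place route amounts to running the usual apt security-update cycle inside each container (the container name is hypothetical):

```shell
# Manual equivalent of what unattended-upgrades automates, per container:
docker exec my-service apt-get update
docker exec my-service apt-get -y upgrade
```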
I would rebuild the container. Containers are usually oriented toward running one app, and it may make little sense to update the supporting filesystem and all the included-but-unused packages there.
Keeping the data in a separate volume lets you have a script that rebuilds the container and restarts it. The advantage is that any other container loaded from that image, or pushed through a repository to another server, has all the fixes applied.
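A minimal rebuild-and-restart script along those lines (the image, container, and volume names are made up):

```shell
# Rebuild on top of a freshly pulled base image so security fixes are baked in:
docker build --pull -t myapp:latest .

# Swap the running container; app data survives in the named volume:
docker stop myapp && docker rm myapp
docker run -d --name myapp -v appdata:/data -p 3000:3000 myapp:latest
```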

Meanjs hotswap deployment

I'm trying to deploy my MEANjs application into production...
So far I've used Jenkins, Git, rsync, etc. to copy the project to the remote server,
and in the final step I just have to:
stop myMeanjsApp
Replace the folder with the new version of the application
start myMeanjsApp
But that would mean downtime, which I'm trying to avoid. So:
1. How can I avoid this?
2. Are there any good-practice workflows for this?
I've seen this, but I'm not sure if it's the way to go.
Or is there another simple way of doing this?
Typically a large scale web application is upgraded by creating new virtual machines running an upgraded version of the software. The new virtual machines are then added to the load balancer (manually or automatically). Then the virtual machines running the old version are removed from the load balancer pool and when all in-progress requests to the old vms are done, the vms can be destroyed. E.g. AWS features like ELB and auto scaling groups makes this an attractive way to upgrade software.
You could do the same even if you have a single server, by starting the new version on a different port.
The naught npm module is a fair approach if you must replace the code in place.
For some applications it might be an option to simply stop accepting new connections and restart with the new version when the last connection is done.
And for some applications you can just kill the old version and start the new version at any time. It all depends on your requirements and environment.

Deploying updates to production node.js code

This may be a basic question, but how do I go about efficiently deploying updates to currently running Node.js code?
I'm coming from a PHP and client-side JavaScript background, where I can just overwrite files when they need updating and the changes are instantly available on the production site.
But in Node.js I have to overwrite the existing files, then shut down and re-launch the application. Should I be worried by the potential downtime in this? To me it seems like a riskier approach than the PHP (scripting) way, unless I have a server cluster where I can take down one server at a time for updates.
What kind of strategies are available for this?
In my case it's pretty much:
svn up; monit restart node
This Node server is acting as a comet server with long polling clients, so clients just reconnect like they normally would. The first thing the Node server does is grab the current state info from the database, so everything is running smoothly in no time.
I don't think this is really any riskier than doing an svn up to update a bunch of PHP files. If anything it's a little bit safer. When you're updating a big php project, there's a chance (if it's a high traffic site it's basically a 100% chance) that you could be getting requests over the web server while you're still updating. This means that you would be running updated and out-of-date code in the same request. At least with the Node approach, you can update everything and restart the Node server and know that all your code is up to date.
I wouldn't worry too much about downtime, you should be able to keep this so short that chances are no one will notice (kill the process and re-launch it in a bash script or something if you want to keep it to a fraction of a second).
Of more concern however is that many Node applications keep a lot of state information in memory which you're going to lose when you restart it. For example if you were running a chat application it might not remember who users were talking to or what channels/rooms they were in. Dealing with this is more of a design issue though, and very application specific.
If your Node.js application "can't skip a beat", meaning it is under continuous bombardment of incoming requests, you simply can't afford the downtime of a quick restart (even with nodemon). I think in some cases you simply want a seamless restart of your Node.js app.
To do this I use naught: https://github.com/superjoe30/naught
Zero downtime deployment for your Node.js server using builtin cluster API
Some cloud Node.js hosting providers (like Nodejitsu or Windows Azure) keep both versions of your site on disk in separate directories, and just redirect traffic from the old version to the new one once the new version has been fully deployed.
This is usually a built-in feature of Platform as a Service (PaaS) providers. However, if you are managing your own servers, you'll need to build something to shift traffic from one version to the next once the new one is fully deployed.
An advantage of this approach is that rollbacks are easy, since the previous version remains on the server intact.