Implementing a CI/Deployment Pipeline for a Node app - node.js

I will shortly be in the process of rewriting a node app with the intention of
implementing Continuous Integration and TDD.
I also want to design and set up a deployment pipeline for development, staging, and production.
Currently I'm using Shipit to push changes to different instances that have pre-configured environments. I've heard about deploying Docker containers with the needed environments, and I'd like to learn more about that.
I'm looking at Travis CI for automated testing/builds, and from my understanding, one can push the Docker image to a registry after the build succeeds.
I'm also learning about scaling, and looking at a design for production that incorporates Google Cloud servers/services serving 3 clustered instances of the Node app, a Redis cluster, and 2 PostgreSQL nodes, with each service behind a load balancer.
I've heard of Kubernetes being used to manage and deploy containerized applications, but I'm curious about how it all fits together.
In my head, the process would be as follows:
Commit changes on the dev machine and push to the repository.
Travis CI builds the image and runs tests (what about migrations and pushing schema changes to the PostgreSQL service?), then pushes it to Google Container Registry.
Log into Google Container Engine and run the app with Kubernetes.
What about the Redis Cluster? The PostgreSQL nodes?
I apologize in advance if this question is lacking in clarity and knowledge, but I'm trying to learn more before I move along.

Have you considered Google Cloud Container Builder? It's very easy to set up a trigger from your GitHub repository, which will start a new build on changes (branch or tag).
As part of the build, you can push the new image to GCR.
And you could also deploy to Kubernetes as part of the same build.
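For illustration, here is a minimal sketch of the commands such a build effectively runs, assuming a hypothetical project my-project, an image/deployment both named my-app, and a GKE cluster my-cluster:
# Tag the image with the current commit
TAG=$(git rev-parse --short HEAD)
# Build the image with Cloud Build and push it to GCR
gcloud builds submit --tag "gcr.io/my-project/my-app:${TAG}" .
# Fetch cluster credentials and roll the existing Kubernetes deployment to the new image
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl set image deployment/my-app "my-app=gcr.io/my-project/my-app:${TAG}"
In practice the trigger runs these steps for you from a build configuration, so nothing has to be started by hand.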

Related

How many agents should I have?

I'm trying to build a branch-based GitOps declarative infrastructure for Kubernetes. I plan to create clusters on a cloud provider with Crossplane, and those clusters will be stored in GitLab. However, as I start building, I seem to be running into gitlab-agent sprawl.
Every application I will be deploying to each of my environments is stored in a separate Git repo, and I'm wondering if I need a separate agent for each repo and environment. For example, I have my three clusters prod, stage, and dev, and my three apps, API, kafka, and DB. I've started with three agents per repo (gitlab-agent-api-prod, gitlab-agent-kafka-stage, ...), which seems a bit excessive. Do I really need 9 agents?
Additionally, I now have to install as many agents as I have apps onto each of my clusters, which already eats up significant resources. I'd imagine I can get away with one GitLab agent per cluster; I'm just not seeing how that is done. Any help would be appreciated!
PS: if anyone has a guide on how to automatically add gitlab agents to new clusters created with crossplane, I'm all ears. Thanks!

How to upgrade a NodeJS Docker container?

I have a NodeJS image based on the official node Docker image running in a production environment.
How to keep the NodeJS server up-to-date?
How do I know when or how often to rebuild and redeploy the docker image? (I'd like to keep it always up to date)
How do I keep the npm packages inside of the Docker image up to date?
You can use Jenkins to schedule a job that rebuilds the Node.js image at a desired interval.
The best way to handle packages and updates for Docker images is to create a separate tag for each set of updates. Separate tags for new updates let you roll back in case of any backward-compatibility issue.
From this new image, build your application image, and always run the test suite if you want to achieve continuous delivery.
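As a rough sketch, the scheduled job could run something like the following (the base image, registry host, and tag scheme are placeholders):
# Date-stamped tag so every scheduled rebuild gets its own rollback point
TAG="$(date +%Y%m%d)"
# Pull the latest upstream base image and rebuild the application image on top of it
docker pull node:18-alpine
docker build -t "registry.example.com/myapp:${TAG}" .
# Run the test suite against the fresh image before publishing it
docker run --rm "registry.example.com/myapp:${TAG}" npm test
docker push "registry.example.com/myapp:${TAG}"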
[UPDATE] - Based on comments from OP
To get the newest images from Docker Hub and then deploy them through the following process, you can use the Docker Hub API (based on the Registry HTTP API) to query for the tags of an image. Then find the image you use (Alpine, Slim, whatever) and take its most recent tag. After this, run it through your test pipeline and register that tag as a deploy candidate.
# Log in to Docker Hub to obtain a JWT (replace the credentials with your own)
TOKEN=$(curl -s -H "Content-Type: application/json" -X POST \
  -d '{"username": "MyDockerHubUsername", "password": "MyPassword"}' \
  https://hub.docker.com/v2/users/login/ | jq -r .token)
REPO="node"
USERNAME="library"   # official images such as "node" live under the "library" namespace
TAGS=$(curl -s -H "Authorization: JWT ${TOKEN}" \
  "https://hub.docker.com/v2/repositories/${USERNAME}/${REPO}/tags/")
Your question is deceptively simple. In reality, keeping a production image up to date requires a lot more than just updating the image on some interval. To achieve true CI/CD of your image you'll need to run a series of steps each time you want to update.
A successful pipeline (Jenkins, Bamboo, CircleCI, CodePipeline, etc.) will incorporate all of these steps, and will, ideally, be run on each commit:
Static Analysis
First, analyze your code using a linter (ESLint) and some code coverage metric. I won't say what is considered an acceptable level of coverage, as that is largely opinion based, but at least some amount of coverage should be expected.
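A minimal example of this stage, assuming ESLint and nyc are dev dependencies and 80% is an arbitrary threshold:
# Lint the codebase; a non-zero exit code fails this stage
npx eslint .
# Collect coverage while running the tests and fail below the chosen line-coverage threshold
npx nyc --check-coverage --lines 80 npm test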
Test (Unit)
Use something like Karma/Mocha/Cucumber to run unit tests on your code.
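For instance, with Mocha (the test directory name is an assumption):
# Run the unit test suite; CI marks the build failed on a non-zero exit code
npx mocha --recursive test/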
Build
Now you can build your Docker image. I prefer tools like Hashicorp's Packer for building images.
Since I assume you're running a node server (Express or something like it) from within the container, you may also want to spin up the container and run some local acceptance testing after this stage.
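A rough sketch of that stage, assuming the app listens on port 3000 and exposes a /health endpoint (both assumptions):
# Build the application image
docker build -t myapp:candidate .
# Spin the container up locally and hit it with a basic acceptance check
docker run -d --rm --name myapp-test -p 3000:3000 myapp:candidate
sleep 5
curl --fail http://localhost:3000/health
docker stop myapp-test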
Register
After you've accepted local testing of the container, register the image with whichever service you use (ECR, Docker Hub, Nexus) and tag it in some meaningful way.
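For example, with a plain Docker registry (the registry host is a placeholder):
# Tag the accepted image with the commit SHA and push it to the registry
TAG=$(git rev-parse --short HEAD)
docker tag myapp:candidate "registry.example.com/myapp:${TAG}"
docker push "registry.example.com/myapp:${TAG}"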
Deploy
Now that you have a functioning container, you'll need to deploy it to your orchestration environment. This might be Kubernetes, Docker Swarm, AWS ECS or whatever. It's important that you don't yet serve traffic to this container, however.
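Sketched with Kubernetes, for example, the new version can sit alongside the old one without taking traffic (the names and the abc1234 tag are placeholders, continuing from the Register step):
# Create a "green" deployment for the new image; the Service still selects the old ("blue") pods, so no traffic reaches these yet
kubectl create deployment myapp-green --image=registry.example.com/myapp:abc1234
kubectl scale deployment myapp-green --replicas=3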
Test (Integration)
With the container running in a meaningful test environment (nonprod, stage, test, whatever) you can now run integration tests against it. These would check that it can connect to the data tier, or would look for a large occurrence of 500/400 errors.
Don't forget: security should always be a part of your testing as well. This is a good place for that.
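A crude example of such a smoke check against the test environment (the URL, request count, and /health endpoint are made up):
# Hit the service in the test environment and fail if any request comes back as a 4xx/5xx
for i in $(seq 1 20); do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://myapp.test.internal/health)
  if [ "${STATUS}" -ge 400 ]; then
    echo "Request ${i} failed with HTTP ${STATUS}" && exit 1
  fi
done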
Switch
Now that you've tested in nonprod, you can either deploy to the production environment or switch routing to the standing containers which you just tested against. Here you should decide whether you'll use blue/green or A/B deployment. If blue/green, start routing all traffic to the new container. If A/B, set up a routing policy based on some ratio. Whichever you use, make sure you have an idea of what failure rate is considered acceptable. Monitor the new deployment for any failures (500 error codes or whatever you think is important) and make sure you have the ability to quickly roll back to the old containers if something goes wrong.
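In Kubernetes terms, for example, a blue/green switch can be as small as repointing the Service selector at the new pods (the Service and label names are placeholders):
# Route all traffic to the pods of the "green" deployment by changing the Service selector
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp-green"}}}'
# Rolling back is the same command pointing the selector back at the "blue" pods
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp-blue"}}}'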
Acceptance
After enough time has passed without defects, you can accept the new container as a stable candidate. Retag the image, or save the image tag somewhere with the denotation that it is "stable", and make that the new de facto image for launching.
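For example, with a plain Docker registry (the host and SHA tag are placeholders):
# Promote the soaked image by adding a "stable" tag alongside its immutable SHA tag
docker pull registry.example.com/myapp:abc1234
docker tag registry.example.com/myapp:abc1234 registry.example.com/myapp:stable
docker push registry.example.com/myapp:stable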
Frequency
Now to answer "How Often". Frequency is a side effect of good iterative development. If your code changes are limited in size and scope, then you should feel very confident in launching whenever code passes tests. Thus, with strong DevOps practices, you'll be able to deploy a new image whenever code is committed to the repo. This might be once, twice or fifty times a day. The number eventually becomes arbitrary.
Keep NPM Packages Up To Date
This'll depend on what packages you're using. For public packages, you might want to pin to a specific version, then create pipelines that test newer releases of those packages in a sandbox environment before allowing them into your environment.
For private packages, make sure you have a pipeline for each of those as well. The pipeline should run analysis, testing and other important tasks before registering new code with npm or your private registries (Nexus, for example).
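A few npm commands that are commonly wired into such pipelines (nothing project-specific assumed beyond a package-lock.json):
# List dependencies with newer versions than the ranges in package.json allow
npm outdated
# Check installed packages against known vulnerability advisories
npm audit
# In CI, install strictly from the lockfile so builds stay reproducible
npm ci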

Deploy a nodejs server for web CRM for production

I'm looking for a method or tutorial for deploying a Node.js Express app on a local server/computer. This will be for a production environment. Everything I read about solutions like Zeit Now, localtunnel, forever, PM2 and similar tools says that they aren't recommended for production environments. The idea is to have a public website without a hosting provider. I need a method that allows me to keep more than one node/web app active at the same time.
When people say a component is not recommended for production, it does not mean that it is not stable. Most of the time it means that it is not a full-blown solution that considers all the aspects of a production deployment:
scalability
fail-over
security
configurability
automation
etc.
If you are trying to build a solution that has precise requirements (requests per second, media streaming, etc.), you should state them in your question as well to make it concrete. If this is not the case, you just have to install a basic setup that runs your configuration and fix bottlenecks as they appear. Don't try to build a theoretically correct solution now.
A couple of examples:
A classical setup (goes well with Do-It-Yourself deployments)
install Git + (Node.js and NPM) + (Forever or equivalent) + your database (e.g. MongoDB) + (NGINX or HAProxy) on your favourite/accepted Linux distribution
clone each Node.js app in its own directory
install cronjobs for basic monitoring and maintenance
add scripts to dynamically remove/add NGINX web server configurations based on deleted/added Node.js apps
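As a small illustration of the keep-alive and NGINX pieces of that setup (the paths and app names are examples):
# Keep each app alive with forever (or an equivalent process manager)
forever start /srv/apps/crm/server.js
forever start /srv/apps/api/server.js
# Cron entries (crontab -e) so the apps come back after a reboot
# @reboot forever start /srv/apps/crm/server.js
# @reboot forever start /srv/apps/api/server.js
# After adding or removing an app's NGINX server block, validate and reload
sudo nginx -t && sudo systemctl reload nginx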
A more modern setup (goes well with AWS/GCE deployments but also possible locally with tools like skaffold)
install a Kubernetes cluster on a couple of machines
prepare a base Docker container image that matches all your Node.js applications
if required, add a Dockerfile to each Node.js application to build one Docker image per application based on the base Docker container image
add a new deployment for each of your Node.js applications
Kubernetes will handle the "keep-alive" for you
fill in the plumbing between your server network (DNS, IP, ports) and the IPs provided to you by Kubernetes (NGINX or HAProxy would also fill this role)
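For a single app, those steps could boil down to something like this (the image and names are placeholders; a real setup would normally use manifest files rather than imperative commands):
# One deployment per Node.js app; Kubernetes restarts crashed pods automatically
kubectl create deployment crm --image=registry.example.com/crm:1.0.0
kubectl scale deployment crm --replicas=2
# Expose the app inside the cluster; NGINX/HAProxy (or an Ingress) handles the public side
kubectl expose deployment crm --port=80 --target-port=3000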

Replacing database connection strings in the Docker image

I'm having a hard time with the application's release process. The app is being developed in .NET Core and uses 'appsettings.json', which holds the connection string to a database. The app should be deployed to a Kubernetes cluster in Azure. We have build and release processes in Azure DevOps, so the process is automated, but the problem is the need to deploy the same app to multiple environments (DEV/QA/UAT), and every environment uses its own database. When we build the Docker image, the 'appsettings.json' that holds the connection string is baked into the image. The next step pushes the image to a container repository, which the Release process then uses to deploy the image to a cluster (the steps are classic).
Replacing the connection parameters with variables at the build step is not a big deal. However, it is the Release process that controls the deployment to multiple environments. I don't see how I can substitute the database connection string in the Release pipeline... or rather, how to deploy to three different environments with the database connection string set properly for each of them.
Please suggest how this can be achieved. The only option I came up with is having three separate build pipelines, one for every environment, which doesn't look pretty. The entire idea behind Release is that you can manage the approval process before rolling out the changes to the next environment.
I decided to proceed with Kubernetes secrets. I found a good article around this issue here: https://strive2code.net/post/2018/12/07/devops-friday-build-a-deployment-pipeline-using-k8s-secrets
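For reference, the gist of that approach sketched with kubectl; the secret name, key, and connection string are placeholders, and each environment's cluster or namespace gets its own secret:
# Create a per-environment secret holding the connection string (run once per cluster/namespace)
kubectl create secret generic db-conn \
  --from-literal=ConnectionStrings__Default='Server=dev-sql;Database=app;User Id=app;Password=changeme'
# Inject the secret into the deployment as environment variables; ASP.NET Core's double-underscore convention maps it over the value in appsettings.json
kubectl set env deployment/myapp --from=secret/db-conn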

Deploy node.js in production

What are the best practices for deploying a Node.js application in production?
I would like to know how production Node.js APIs are being deployed today; right now my application is in Docker and running locally.
I wonder if I should put NGINX inside the container and serve my app behind it, or just push the Node image that is already running today.
* I need load balancing.
There are a few main types of deployment that are popular today.
Using platform as a service like Heroku
Using a VPS like AWS, Digital Ocean etc.
Using a dedicated server
This list is in order of growing difficulty and control. It's easiest with PaaS, but you get more control with a dedicated server, though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS, using a NAT Gateway with a dedicated Node server and a MongoDB server behind the gateway. (Obviously this is a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which NPM libraries you will need for production, how you handle environment variables, and how you use clustering across cores.
I would suggest, very strongly, using a tool like PM2 to handle clustering, server shutdowns and restarts, and logs (workers too, if you need them and write code for them).
This list can go on and on, but keep in mind this is only from an AWS perspective. Setting up a Gateway correctly on AWS is also not an easy process. Be prepared for some gotchas along the way.
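To illustrate the PM2 suggestion above (the entry file and app name are assumptions):
# Run the app in cluster mode with one worker per CPU core
pm2 start server.js -i max --name api
# Persist the process list and generate a boot-time startup script
pm2 save
pm2 startup
# Tail logs and watch restarts/memory
pm2 logs api
pm2 monit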
