nodeJS/GitlabCI: How to serve a decentralized production application - node.js

I'm developing a nodeJS application using nextJS together with an expressJS application, and I'm using my own GitLab instance for managing the git repository.
However, the application should not be deployed to a webserver in the end; instead, I need to create a decentralized production application. To make it a bit clearer:
Developing the application locally
Push the application to my remote server
My customers should be able to get the production app code from my remote server
Customers will run the application in their local environment and should be able to pull new versions from the remote server
So the application itself won't run on my remote server, but on the customers' local servers.
Normally I would use my CI to test and build the application (which is done by npm run build). Then I build a Docker image which I use to run the application on my server. But all of that normally happens on the same server.
In this case I need to build the application and serve it to the customers, i.e. the customers should be able to pull the production code. How can this be done?
Maybe I can't see the wood for the trees... and that's why I'm asking for help/hints.

There are a number of ways you can do this and a number of tools you can use as well. You probably want a pipeline similar to the following.
Code is developed locally, committed, and pushed to the self-hosted gitlab.
GitLab CI (or any other CI you have configured) will then run against your code.
The final step of the CI is to create a "bundle" of your application. This is probably a .zip or similar and this will be pushed to a remote storage location. It is also possible to ensure that this is done only when pushing to specific branches (such as master).
You can use a number of things as your remote storage location, such as some sort of AWS S3 bucket, or something more complex such as Nexus (there are many free alternatives).
You would then want to give your customers access to either this storage location (if you're using something like S3, or Digital Ocean Block Storage, etc), or access to your distribution repository (such as Nexus).
You should be able to generate some sort of SSH key that you can put on your GitLab CI server and use to publish to these places. It should then be a simple case of making an HTTP call to upload a file to the relevant source. This would often be called when everything has been successful, and only for specific branches. For example, if all your tests pass and you're on the master branch, zip up all your code and make an HTTP call to push the new zip file to AWS S3, which your customers have access to.
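To make that concrete, a rough .gitlab-ci.yml sketch of this flow is shown below. The bucket name, the AWS credential variables, and the list of files that go into the bundle are assumptions you would replace with your own; the upload could just as well target Nexus or any other storage.

stages:
  - test
  - build
  - publish

test:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test

build:
  stage: build
  image: node:18
  script:
    - npm ci
    - npm run build
    # bundle the production build into one archive (adjust the file list to your app)
    - tar -czf app-bundle.tar.gz .next package.json package-lock.json
  artifacts:
    paths:
      - app-bundle.tar.gz

publish:
  stage: publish
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  script:
    # AWS credentials are expected to be set as protected CI variables
    - aws s3 cp app-bundle.tar.gz s3://my-distribution-bucket/app-bundle.tar.gz
  only:
    - master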
For further ideas, you could make your storage / distribution location into an FTP server if you wanted to, or a local network drive depending on what your needs are for distribution. If you're just dealing with docker for your customers, then I'd suggest building a Docker image and self-hosting a docker registry. Push to that registry after you've built the image, and that would be the end of your CI run.
As a side note, if your customers are using docker you could create a Docker image and either push it to a registry or export it as a .tar and upload it to a file storage location (S3 for example). This would make things simple for your customers and ensure you control the image creation step (if that's something you want to manage).
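If you take the image-export route, the final CI job might look roughly like the sketch below; the image name, the tag, and the upload destination are placeholders, and it assumes a runner that supports Docker-in-Docker.

export-image:
  stage: publish
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t my-app:latest .
    # write the image to a tarball that customers can download
    - docker save my-app:latest -o my-app-latest.tar
    # upload the tarball to whatever storage your customers can reach,
    # e.g. aws s3 cp my-app-latest.tar s3://my-distribution-bucket/
  artifacts:
    paths:
      - my-app-latest.tar
  only:
    - master

Your customers would then download the tarball and run docker load -i my-app-latest.tar before starting the container.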
The gitlab ci docs might help you with the specifics of uploading artifacts to various locations.

Related

What is the purpose of Docker?

So in my head, Docker is a container management system that allows you to build an application in a unified way so you don't need to worry about version control, client environment configuration and so on.
However, there is some concept that I am clearly missing:
In my head, Docker basically wraps your whole program in a container to be shipped easily to clients and anybody who wants to use your product. And from there I can just tell clients to install so-and-so to set up the whole system on their own machine. However, digging into Docker, I don't understand how pulling and pushing images on DockerHub helps that use case, or why there isn't an executable that runs a Docker image in one click.
DockerHub images take so many steps to unpack and edit. I was assuming that those templates on DockerHub exist for us to pull and edit for our own use cases, but that does not seem to be the case, because the steps to unpack an image are many more than I imagined, and the use case seems to be more "download and use the image" than editing it.
Surely I am missing something about Docker. What is the purpose of pushing and pulling images on DockerHub? How does that fit into the use case of containerizing my software to be executed by clients? Is the function of DockerHub images just to be pulled and run, not edited?
It's so hard for me to wrap my head around this because I'm assuming Docker is for containerizing my application so it's easily executable by clients who want to install my system.
To further explain this answer I would even say that docker allows you to have a development environment tied to your application that is the same for all your developers.
You would have your git repo with your app code, and a docker container with all that is needed to run the application.
This way, all your developers are using the same software versions, and the docker container(s) should replicate the production environment (you can even deploy with it, which is another use for it). With this, the "it works on my machine" problem disappears, because everyone is working in the same environment.
In my case, all our projects have a docker-compose structure associated with them so that each project always carries its server requirements. And if one developer needs to add a new extension, he can just add it to the docker config files and all developers will receive the same extension once they update to the latest release.
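To illustrate, a minimal docker-compose.yml for such a shared dev environment might look like the sketch below; the service names, images, ports, and mount paths are only examples and would follow your project's actual requirements.

services:
  app:
    build: .                  # application image built from the repo's own Dockerfile
    volumes:
      - .:/var/www/html       # mount the source so everyone works against the same code
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mariadb:10.6       # pinned version so every developer runs the same database
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:

Adding a new extension then just means changing the Dockerfile or this file and committing, exactly as described above.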
I would say there are two uses to having images on DockerHub.
The first is that some images are extremely useful as-is. Pulling a redis/mariadb image saves you the trouble of setting it up and configuring it yourself.
The second is that you can think of a docker image as a layered item: assume your application is a PHP server. You can (and will have to) create an image for your app source code. BUT the container will need PHP to run your source code!
This is why you have a FROM keyword in a Dockerfile, so that you can define a "starting layer". In the case of a PHP server you'd write FROM php:latest, and docker would pull a PHP image for your server to use from DockerHub.
Without using DockerHub, you'd have to build your image from scratch and bundle everything into it yourself: the operating system layer, PHP, your code, etc. Having ready-to-use images to start from makes the image you're building much lighter.
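As a tiny sketch of that layering (the PHP tag and paths are illustrative only):

# start from the official PHP image instead of building an OS + PHP layer yourself
FROM php:8.2-apache
# add only your own layer on top: the application source code
COPY . /var/www/html/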

Deploying docker images

I have a nodejs server app and a separate React client app.
I have created docker images for both and a docker-compose file at the top level to build and run both.
I'm struggling to understand how I can deploy/host these somewhere?
Do I deploy both separate images to the Docker registry? Or is there a way of hosting this on its own as an entire docker container?
If you've already built the docker images locally, you can use DockerHub for hosting them. If you're using GitHub Actions this gist script can be helpful.
A Docker registry is storage for built images. Think of it as the location for compiled "binaries", if you compare it with regular software.
Typically you might have some kind of CI for your source code, and when you trigger it, for example by committing to the 'master' branch, a new image is built on the CI. The CI can push it into a registry for long-term storage, or push it directly to your hosting server (or a registry on your server).
You can configure your docker-compose to pull the latest images from a private registry when you rerun it on your server.
Basically, hosting happens when you run docker-compose up on some server, provided you have done the required configuration. It really depends on where you are going to host them.
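For example, the docker-compose.yml on the hosting server can reference the images by their registry path, so a redeploy is just a pull and a restart; the registry host and image names below are placeholders.

services:
  server:
    image: registry.example.com/myproject/server:latest
    restart: always
    ports:
      - "3000:3000"
  client:
    image: registry.example.com/myproject/client:latest
    restart: always
    ports:
      - "80:80"

After a one-time docker login registry.example.com on the server, updating is roughly docker-compose pull followed by docker-compose up -d.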
Maybe helpful:
https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
https://medium.com/@stoyanov.veseline/pushing-docker-images-to-a-private-registry-with-docker-compose-d2797097751

Good practices for pulling from git repo into production server

I have a DigitalOcean VPS with Ubuntu and a few Laravel projects; for my projects' initial setup I do a git clone to create a folder with my application files from my online repository.
I do all development work on my local machine, where I have two branches (master and develop). What I do is merge develop into my local master, then I push master to my online repository.
Now, back on my production server, when I want to bring all the changes into production I do a git pull from origin. So far this has resulted in git telling me to stash my changes; why is this?
What would be the best approach to pull changes into the production server? Bear in mind that my production server has no working directory per se; all I do on my VPS is either clone or push upgrades into production.
You can take a look at CI/CD (continuous integration / continuous delivery) systems. GitLab, for example, offers a free-to-use plan for small teams.
You can create a pipeline with a manual deploy step (you have to press a button after the code is merged to the master branch) and use whatever tool you like to deploy your code (scp, rsync, ftp, sftp etc.).
And the biggest benefit is that you can have multiple intermediate steps (even for the working branches) where you can run unit tests, which prevents you from uploading failing builds (whenever you merge non-working code).
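A rough GitLab CI sketch of such a pipeline is below; the server address, target path, SSH key variable, and test commands are assumptions you would adapt to your own project.

stages:
  - test
  - deploy

test:
  stage: test
  image: composer:2
  script:
    - composer install --no-interaction
    - ./vendor/bin/phpunit

deploy_production:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client rsync
    # the private key is assumed to be stored in a protected, masked CI variable
    - eval $(ssh-agent -s)
    - echo "$DEPLOY_SSH_KEY" | ssh-add -
  script:
    # host key checking is disabled here only for brevity
    - rsync -az -e "ssh -o StrictHostKeyChecking=no" --exclude=.git ./ deploy@example.com:/var/www/myapp/
  when: manual          # the "press a button" step mentioned above
  only:
    - master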
For the first problem, do a git status on production to see which files that git sees as changed or added and consider adding them to your .gitignore file (which itself should be a part of your repo). Laravel generally has good defaults for these, but you might have added things or deviated from them in the process of upgrading Laravel.
For the deployment, the best practice is to have something that is consistent, reproducible, loggable, and revertable. For this, I would recommend choosing a deployment utility. These usually do pretty much the same thing:
You define deployment parameters in code, which you can commit as a part of your repo (not passwords, of course, but things like the server name, deploy path, and deploy tasks).
You initiate a deploy directly from your local computer.
The script/utility SSH's into your target server and pulls the latest code from the remote git repo (authorized via SSH key forwarded into the server) into a 'release' folder.
The script does any additional tasks you define (composer install, npm run prod, systemctl restart php-fpm, soft-linking shared files like .env, etc.)
The script soft-links the document root to your new 'release' folder, which results in an essentially zero-downtime deployment. If any of the previous steps fail, or you find a bug in the latest release, you just soft-link to the previous release folder and your site still works.
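Stripped of the tooling, what these utilities run on the server boils down to roughly the following shell sketch; the paths, repo URL, release naming, and task list are illustrative and depend on your project.

# create a timestamped release folder and pull the code into it
RELEASE=/var/www/myapp/releases/$(date +%Y%m%d%H%M%S)
git clone --depth 1 git@gitlab.example.com:me/myapp.git "$RELEASE"
cd "$RELEASE"

# project-specific tasks
composer install --no-dev
npm ci && npm run prod

# soft-link shared files that live outside the release
ln -sfn /var/www/myapp/shared/.env "$RELEASE/.env"

# point the document root at the new release; rolling back means re-pointing this link
ln -sfn "$RELEASE" /var/www/myapp/current
systemctl restart php-fpm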
Here are some solutions you can check out that all do this sort of thing:
Laravel Envoyer: A 1st-party (paid) service that allows you to deploy via a web-based GUI.
Laravel Envoy: A 1st-party (free) package that allows you to connect to your prod server and script deployment tasks. It's very bare-bones in that you have to write all of the commands yourself, but some may prefer that.
Capistrano: This is a (free) tried-and-tested, popular Ruby-based deployment utility.
Deployer: The (free) PHP equivalent of Capistrano. Easier to use, has a lot of built-in tasks (including a Laravel one), and doesn't require ruby.
Using these utilities is not necessarily exclusive of doing CI/CD if you want to go that route. You can use these tools to define the CD step in your pipeline while still doing other steps beforehand.

How does deployment on remote servers work?

I'm a bit new to version control and deployment environments and I've come to a halt in my learning about the matter: how do deployment environments work if developers can't work on the same local machine and are forced to always work on a remote server?
How should the flow of the deployment environments be set up according to best practices?
For this example I considered three deployment environments: development, staging and production; and three storage environments: local, repository server and final server.
This is the flow chart I came up with but I have no idea if it's right or how to properly implement it:
PS. I was thinking the staging tests on the server could have restricted access through login or ip checks, in case you were wondering.
I can give you (based on my experience) a good and straightforward practice; this is not the only approach, as there is no single standard for how to work on all projects:
Use a distributed version control system (like git/github):
Make a private/public repository to handle your project
Local development:
Developers will clone the project from your repo and contribute to it; it is recommended that each one works on their own branch and creates a new branch for each new feature
Within your team, there is one person responsible for merging the branches that are ready into the master branch
I strongly suggest working in a Virtual Machine during development:
To isolate the dev environment from the host machine and deal with dependencies
To have a Virtual Machine identical to the remote production server
Easy to reset, remove, reproduce
...
I suggest using VirtualBox as the VM provider and Vagrant for provisioning
I suggest that your project folder be a shared folder between your host machine and your VM, so you write your source code on your host OS using the editor you love, and at the same time the code exists and runs inside your VM; isn't that amazingly awesome?!
If you are working with Python I also strongly recommend using virtual environments (like virtualenv or anaconda) to isolate and manage the project's dependencies
Then each developer, after writing some source code, can commit and push his changes to the repository
I suggest using project automation/setup tools like fabric/fabtools (for Python):
Make a script that, with one click or a few commands, reproduces the whole environment, all the dependencies, and everything needed for the project to be up and running, so that all developers (backend, frontend, designers...), no matter their knowledge or their host machine type, can get the project running easily. I also suggest doing the same thing for the remote servers, whether manually or with tools like fabric/fabtools
The script will mainly install OS dependencies, then project dependencies, then clone the project repo from your version control; to do so, you need to give the remote servers (testing, staging, and production) access to the repository: add the SSH public key of each server to the keys in your Version Control system (or use agent forwarding with fabric)
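A minimal sketch of giving one server that access (the key comment and repository URL are placeholders):

# on the remote server: generate a key pair (passphrase-less here only for simplicity)
ssh-keygen -t ed25519 -C "staging-server" -f ~/.ssh/id_ed25519 -N ""
# print the public key and add it as a deploy key / SSH key in your Version Control system
cat ~/.ssh/id_ed25519.pub
# afterwards the server can clone and pull over SSH
git clone git@gitlab.example.com:me/myproject.git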
Remote servers:
You will need at least a production server which makes your project accessible to the end-users
it is recommended that you also have testing and staging servers (I suppose that you know the purpose of each one)
Deployment flow: Local - Repo - Remote server. How does it work?
Give the remote servers (testing, staging, and production) access to the repository: add the SSH public keys of each server to the keys in your Version Control system (or use agent forwarding with fabric)
The developer writes the code on his machine
Eventually writes tests for his code and runs them locally (and on the testing server)
The developer commits and pushes his code to the branch he is using to the remote Repository
Deployment:
5.1 If you would like to deploy a feature branch to testing or staging:
ssh access to the server and then cd to the project folder (cloned from repo manually or by automation script)
git checkout <the branch used>
git pull origin <the branch used>
5.2 If you would like to deploy to production:
Make a pull request, and after the pull request gets validated by the manager and merged into the master branch:
ssh access to the server and then cd to the project folder (cloned from repo manually or by automation script)
git checkout master # not needed because it should already be on master
git pull origin master
I suggest writing a script with fabric/fabtools, or using tools like Jenkins, to automate the deployment task.
Voilà! Deployment is done!
This is a somewhat simplified approach; there are still plenty of other recommended tools, tasks, and best practices.

How can I solve the deployment/updating of dockerized app on my VPS?

It's not easy to make a good title for this question, so if someone has a better idea, please edit.
That's what I have:
VPS (KVM)
Docker
Nginx-proxy, so all docker containers that are supposed to be exposed are automatically exposed on the appropriate domain.
Some apps like WordPress just use a container with attached volumes which are accessible by FTP, so managing/updating them is not an issue.
I have a SailsJS app (NodeJS) which I have to dockerize. It will be updated quite often.
I will have some apps written in C# (ASP.NET) / Java (Spring) with a similar scenario as in point 5.
The source code for both 5 and 6 is stored on BitBucket, but that can be changed if it would be better to have a self-hosted git server to solve this.
What I am looking for is an automated process that builds the docker image when I commit, and makes sure that docker pulls the new image and restarts the container with the new content. I do not want to use DockerHub, as there is only 1 private repository, so it will not work long term.
I thought I could do it with Jenkins somehow but have no idea how...
You can set up a private GitLab server.
It provides the three necessary things: a Git repository (managed by you as admin), a completely private Docker registry (so you can privately store your own docker images), and its own CI, which is complete and sufficient to do what you request, integrated seamlessly and working with the former two.
You would set up a GitLab Runner so that when you commit, the image is rebuilt and pushed to the component-specific registry; there are also hooks and environments which allow you to set up the connection back to your server.
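A minimal sketch of such a job in .gitlab-ci.yml, using GitLab's predefined registry variables; the tag scheme and the Docker-in-Docker setup are assumptions that depend on your runner configuration.

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  only:
    - master

On the VPS side, pulling the new tag and restarting the container can then be triggered by a hook or simply by rerunning docker-compose pull and docker-compose up -d, as described in the earlier answers.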

Resources