Configuring a Node.js container in Rancher - node.js

I am attempting to deploy my first workload with rancher.
I am trying to edit the existing default rancher workload, after getting the rancher hello world example working.
I changed the docker image to node:10 and the port to 8080. I am not sure if I can do this directly from Rancher, or if I need to create a docker image under my own user on Docker Hub to do this.
I would like to have a generic image, and then add some additional configuration to rancher, so I can reuse these settings for other node.js projects.
I would like a base node.js container, and then add a parameter to checkout a specific branch of a specific project whenever the container boots for example. I am planning on getting this integrated with teamcity to deploy to the rancher containers whenever teamcity detects a new commit.
Doing this in stages, I would like to get a node:10 container within rancher up and running. Can this be done by simply adding node:10 as the image and setting the default port in the add port section? If so, what is the default port to use?
I have tried the above but haven't been able to get the container to load; I get a connection refused error when I try to access it.

Yes, you can have different images. There are many projects which use this pattern.
For example you can check this repo: https://github.com/rocker-org/rocker
The r-devel image is based on the r-base image:
https://github.com/rocker-org/rocker/blob/master/r-devel/Dockerfile#L4
This functionality is not specific to Rancher. After you have your containers packaged according to your needs, you can use Rancher to run them.
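One thing worth noting for the original question: a bare node:10 container has no server listening on any port (its default command just starts Node itself), so a connection refused is expected until an image actually runs your app. A reusable base image along the lines the question describes might look like this sketch, where the repo URL and branch come in as environment variables set per-workload in Rancher (REPO_URL and BRANCH are assumed names, not anything Rancher defines):

```dockerfile
# Sketch of a reusable Node.js base image
FROM node:10
WORKDIR /app
# entrypoint.sh (below) clones and starts the app at boot
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# the app itself must listen on this port
EXPOSE 8080
ENTRYPOINT ["/entrypoint.sh"]
```

```shell
#!/bin/sh
# entrypoint.sh -- REPO_URL and BRANCH are environment variables
# you would set in the Rancher workload's environment section
set -e
git clone --branch "$BRANCH" --depth 1 "$REPO_URL" /app/src
cd /app/src
npm install
exec npm start
```

With an image like this pushed to a registry, each Node.js project in Rancher reuses the same image and differs only in its environment variables.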

Related

Is it possible to edit/alter and save the code in a docker container or, failing that, connect an editable app to a docker container and run it?

I need to make changes in a React/Node app which, because of problems/errors installing dependencies via npm or yarn, can only be acquired through docker.
The docker version has the correct dependencies installed and works correctly.
Please forgive my lack of understanding about docker.
My question is: how do I go about editing/altering this app to make the changes required for my project? As far as I know, the contents of a docker container are read-only. Is there a way, despite this, to access/edit the node/react files and save these changes? Or, as another possibility, can I clone the app from the GitHub repo and then attach/run this app within the docker container, using the dependencies which work inside the docker container?
I have Remote-Containers installed in my vscode, but haven't been able to make heads or tails of how to get that to work, or how it should work.
Would be very grateful for any pointers.
The typical method would be to make your changes to your application, then commit those changes to a source code repository, from which a new docker image would be built based on your code changes.
This new image would be deployed to your servers for use.
While it is possible to alter a running container through some intricate gyrations, those changes are transient and live only as long as that container is running.
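For the second possibility raised in the question, a common middle ground during development is to keep the container's working installed dependencies but bind-mount your own source directory over the app's source inside the container, so edits on the host are visible immediately. A minimal docker-compose sketch, where the image name and paths are assumptions to be adjusted to the real app:

```yaml
version: "3"
services:
  app:
    image: the-prebuilt-app-image   # the image with the working deps
    volumes:
      # mount only the source directory, so the container's own
      # node_modules (with the working dependencies) stays untouched
      - ./src:/usr/src/app/src
    ports:
      - "3000:3000"                 # typical React dev-server port
```

Mounting over the whole app directory would hide the container's node_modules, which is exactly what you want to avoid here.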

What is the purpose of Docker?

So in my head, Docker is a container management system that allows you to build an application in a unified way so you don't need to worry about version control, client environment configuration and so on.
However, there is some concept that I am clearly missing:
In my head, Docker basically wraps your whole program in a container to be shipped easily to clients and anybody who wants to use your product. And from there I can just tell clients to install so-and-so to set up the whole system on their own machine. However, digging into Docker, I don't understand how pulling and pushing images on DockerHub helps that use case, nor why there is no executable to run a Docker image in a click.
DockerHub images take so many steps to unpack and edit. I was assuming that those templates on DockerHub exist for us to pull and edit for our own use cases, but that does not seem to be the case: the steps to unpack an image are much more involved than I imagined, and the use case seems to be more "download and use the image", not editing.
Surely I am missing something about Docker. What is the purpose of pushing and pulling images on DockerHub? How does that fit into the use case of containerizing my software to be executed by clients? Is the function of DockerHub images just to be pulled and run, not edited?
It's so hard for me to wrap my head around this because I'm assuming Docker is for containerizing my application to be easily executable by clients who want to install my system.
To expand on this answer, I would even say that docker allows you to have a development environment tied to your application that is the same for all your developers.
You would have your git repo with your app code, and a docker container with all that is needed to run the application.
This way, all your developers are using the same versions of software, and the docker container(s) should replicate the production environment (you can even deploy with it; that's another use for it). With this, there's no more "it works on my machine" problem, because everyone is working in the same environment.
In my case, all our projects have a docker-compose structure associated with them so that each project always carries its server requirements. And if one developer needs to add a new extension, he can just add it to the docker config files and all developers will receive the same extension once they update to the latest release.
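As a concrete illustration of that pattern, the project repo might carry a docker-compose.yml like this sketch (service and image choices are assumptions), so every developer gets the same stack by running docker-compose up:

```yaml
version: "3"
services:
  app:
    build: .              # the Dockerfile in the repo pins the language
                          # version and any extensions the team adds
    ports:
      - "8000:80"
  db:
    image: mariadb:10.5   # everyone runs the same database version
    environment:
      MYSQL_ROOT_PASSWORD: example   # dev-only credential
```

When a developer adds an extension to the Dockerfile and commits, the next docker-compose build gives everyone the identical environment.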
I would say there are two uses to having images on DockerHub.
The first is that some images are extremely useful as-is. Pulling a Redis or MariaDB image saves you the trouble of setting it up and configuring it yourself.
The second is that you can think of a docker image as a layered item: assume your application is a PHP server. You can (and will have to) create an image for your app source code. BUT the container will need PHP to run your source code!
This is why you have a FROM keyword in a Dockerfile, so that you can define a "starting layer". In the case of a PHP server you'd write FROM php:latest, and docker would pull a PHP image for your server to use from DockerHub.
Without using DockerHub, you'd have to make your image from scratch, and therefore bundle everything into your image: operating system files, PHP, your code, etc. Having ready-to-use images to start from makes the image you're building much lighter.
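The layering described above is only a couple of lines in a Dockerfile (the source path here is an assumption):

```dockerfile
# Starting layer pulled from DockerHub: the OS bits and PHP
# are already set up for you
FROM php:latest

# Your application code is the only new layer you add on top
COPY src/ /var/www/html/
```

Building this produces an image that reuses the cached php layers and adds just your code on top.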

Deploying docker images

I have a Node.js server app and a separate React client app.
I have created docker images for both, and a docker-compose file at the top level to build and run both.
I'm struggling to understand how I can deploy/host these somewhere?
Do I deploy both separate images to a docker registry? Or is there a way of hosting this on its own as an entire docker container?
If you've already built the docker images locally, you can use DockerHub for hosting the docker images. If you're using GitHub Actions, this gist script can be helpful.
A Docker registry is storage for built images. Think of it as the location for compiled "binaries", comparing to regular software.
Typically, you might have some kind of CI for your source code, and when you trigger it, for example by committing to the 'master' branch, a new image is built on the CI. The CI can push it into a registry for long-term storage, or push it directly to your hosting server (or a registry on your server).
You can configure your docker-compose to pull the latest images from the private registry when you rerun it on your server.
Basically, hosting happens when you just run docker-compose up on some server, if you have done the required configuration. It really depends where you are going to host them.
Maybe helpful:
https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
https://medium.com/@stoyanov.veseline/pushing-docker-images-to-a-private-registry-with-docker-compose-d2797097751
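Concretely, the build-and-push side of that flow might look like this (the registry and image names are assumptions):

```shell
# On your machine or CI: build both images, tag them for the
# registry, and push them
docker build -t myregistry.example.com/myapp/server:latest ./server
docker build -t myregistry.example.com/myapp/client:latest ./client
docker push myregistry.example.com/myapp/server:latest
docker push myregistry.example.com/myapp/client:latest
```

On the hosting server, the docker-compose file then references the pushed images instead of building locally:

```yaml
version: "3"
services:
  server:
    image: myregistry.example.com/myapp/server:latest
    ports:
      - "8080:8080"
  client:
    image: myregistry.example.com/myapp/client:latest
    ports:
      - "80:80"
```

Running docker-compose pull followed by docker-compose up -d on the server picks up the new versions.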

What's the differences between the VSTS Build tasks of docker

What's the differences between the VSTS Build tasks of docker (with preview) and docker without preview?
It's said in the description that the 'red' ones can be used with Docker or Azure Container Registry; is this the only difference?
Could they differ by docker/compose version or environment (e.g., one for Windows, one for Linux)?
Based on their source code, the difference between them is added support for Azure Container Registry; the Docker Registry connection is the same.
You can set up a private build agent, add these tasks to a build definition, and queue a build with that build agent; the tasks will then be downloaded to the _task folder (e.g. _work_task) and you can check it.

How can I solve the deployment/updating of dockerized app on my VPS?

It's not easy to make a good title for this question, so if someone has a better idea please edit.
That's what I have:
1. VPS (KVM)
2. Docker
3. Nginx-proxy, so all docker containers that are supposed to be exposed are automatically exposed on the appropriate domain.
4. Some apps like Wordpress just use a container with connected volumes which are accessible by FTP, so managing/updating them is not an issue.
5. A SailsJS (Node.js) app which I have to dockerize. It will be updated quite often.
6. I will have some apps written in C# (ASP.NET) / Java (Spring) with a similar scenario as in point 5.
7. The source code for both 5 and 6 is stored on BitBucket, but that can be changed if it would be better to have a self-hosted git server.
What I am looking for is an automated process which builds the docker image when I commit, and makes sure docker pulls the new image and restarts the container with the new content. I do not want to use DockerHub, as there is only 1 private repository, so it will not work long term.
I thought I could do it with Jenkins somehow, but have no idea how...
You can set up a private GitLab server.
It provides three necessary things: a Git repository (managed by you as admin), a completely private Docker registry (so you can privately store your own docker images), and its own CI, complete and sufficient to do what you request, integrated seamlessly with the former two.
You would set up a GitLab runner so that when you commit, the image is rebuilt and pushed to the component-specific registry; there are hooks and environments which allow you to set up the connection back to your server.
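A minimal .gitlab-ci.yml for that build-and-push step could look like this sketch; the CI_REGISTRY* variables are provided by GitLab itself, everything else here is an assumption:

```yaml
build:
  stage: build
  image: docker:latest
  services:
    - docker:dind   # docker-in-docker so the job can build images
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

On the VPS side, a deploy hook or a periodic docker-compose pull && docker-compose up -d then picks up the freshly pushed image.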
