Docker instance port management - Node.js

I have different Docker instances and I need to start a Node.js process in each of them. For this to happen, does each process need to start on a different port number? How does the container manage that, and is there a Docker management system for it? I want the client to know which port the instance has started the Node.js process on. How can this be automated?

This problem is called "orchestration". I kind of think Docker is a bit overblown, because on its own it doesn't actually solve this problem.
Kubernetes is an open-source tool. Tutum is an online service. Docker has started a tool of its own, but it isn't finished yet.
Honestly, it's a bit of a cluster-show at the moment. If you're not hosting 20+ instances, I'd recommend building bash scripts.
Currently, I use a bespoke solution made from DigitalOcean, Dokku, and bash scripting. This gives me the flexibility of a self-hosted, Heroku-like environment that is very dev friendly.
Dokku lets you deploy Docker apps using a 'git push'. It reads files in your repo to build the image.
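For illustration, a typical Dokku deploy looks something like this (the server hostname and app name below are placeholders, not anything from the question):

    # Add the Dokku host as a git remote (hostname and app name are hypothetical)
    git remote add dokku dokku@my-server.example.com:my-node-app

    # Deploying is just a push; Dokku builds the image from the files in the repo
    git push dokku main   # or 'master', depending on your default branch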

You don't have to start the applications inside Docker on different ports. You can map any port inside the Docker container (for example, port 80) to any port on your host machine.
There is no rule about how to use this to your benefit.
If your clients all have IDs in, say, the 1-10000 range, you can map each Docker container's port 80 to host port "client_id + 20000".
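As a minimal sketch of that scheme (the image name my-node-app is an assumption), the client with ID 42 would get host port 20042:

    # Client with ID 42 -> host port 20042 forwards to port 80 inside the container
    docker run -d --name client-42 -p 20042:80 my-node-app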

Related

Running NodeJS server in production

I have a React + Node app which I need to deploy. I am using nginx to serve my front end, but I am not sure what to use to keep my Node.js server running in production.
The project is hosted on a Windows VM. I cannot use PM2 due to licensing issues. I have no idea whether running the server with nodemon in production is a good idea. I have never deployed an app in production, so I have no idea about appropriate methods.
You may consider forever or supervisor.
Check this blog post on the same.
You can also use Docker. You can create multiple Docker containers that will run your Node server. Then, at the nginx level on your host machine, you can add a load-balancing configuration that routes traffic equally to the different Node containers. This improves your availability and scalability: under heavy traffic you just increase the number of Node containers as and when required. I guess initially 2 containers will be enough to handle the traffic (depends on your use case though).
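A minimal sketch of what that nginx load-balancing configuration might look like (the published host ports and container ports here are assumptions, not values from the question):

    # Round-robin load balancing across two published container ports (values assumed)
    upstream node_backend {
        server 127.0.0.1:3001;   # e.g. docker run -p 3001:3000 ... (first Node container)
        server 127.0.0.1:3002;   # e.g. docker run -p 3002:3000 ... (second Node container)
    }

    server {
        listen 80;

        location / {
            proxy_pass http://node_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }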
Note: You can also use forever or supervisor, as suggested by @Rajesh Gupta, inside your Docker containers for running the Node server. We use PM2 for that.
If you have a database, you can create a separate Docker container for it and map its data directory to a volume on your host machine.
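For example, a standalone database container with its data persisted on the host might be started like this (MongoDB, the image tag, and the paths are assumptions for illustration):

    # Database in its own container, with data persisted on the host (paths assumed)
    docker run -d --name app-db -v /srv/mongo-data:/data/db mongo:6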
You can learn about docker from here.
Also you can read about load balancing in nginx from here.
Furthermore, to improve availability you can add a caching layer between nginx and the Docker containers. Varnish is the best caching service I have used to date.
PS: We use a similar but more advanced architecture to run our e-commerce application, which generates 5-10k orders daily, so this is a tested approach with zero downtime.
Try to dockerize the whole app, including the DB, caching server (if any), etc.
Here are some examples why:
You can launch a fully capable development environment on any computer supporting Docker; you don't have to install libraries or dependencies, download packages, or mess with config files, etc.
The working environment of the application remains consistent across the whole workflow. This means the app runs exactly the same for the developer, tester, and client, be it on a development, staging, or production server. In short, Docker is the counter-measure to the age-old response in software development: "Strange, it works for me!"
Every application requires a specific working environment: pre-installed applications, dependencies, databases, everything in a specific version. Docker containers allow you to create such environments. Contrary to a VM, however, a container doesn't hold the whole operating system, just the applications, dependencies, and configuration. This makes Docker containers much lighter and faster than regular VMs.
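As a minimal sketch of what dockerizing the Node part of such an app could look like (the base image, entry point, and port are assumptions):

    # Dockerfile sketch for a Node.js service; entry point and port are assumed
    FROM node:18-alpine
    WORKDIR /app

    # Install dependencies first so this layer is cached between builds
    COPY package*.json ./
    RUN npm ci --omit=dev

    # Copy the rest of the application code
    COPY . .

    EXPOSE 3000
    CMD ["node", "server.js"]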

Using existing Ansible roles to create a custom Docker image

I currently use Ansible to manage and deploy a fleet of servers.
I wish to start using Docker for some applications and would like to build Docker images using the same scripts we use to configure non-Dockerized hosts.
For example, we have an Ansible role that builds Nginx with third-party modules; I would like to use the same role to build a Docker image with the custom Nginx.
Any ideas how I would get this done?
There is the "Ansible Container" project, https://www.ansible.com/integrations/containers/ansible-container. That page points also to the github repo.
It is not clear how well maintained it is, but their reasoning and approach makes sense.
Consider that you might have some adjustments to make regarding two aspects:
a container should do only one thing (microservice)
how to pass configuration to the container at runtime (Docker has some guidelines, such as environment variables where possible, or mounting a volume with the configuration files); see the sketch below
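For the runtime-configuration point, the usual pattern looks something like this (the image name, variable name, and paths are hypothetical):

    # Pass settings as environment variables and mount a read-only config volume
    # (image name, variable name, and paths are assumptions for illustration)
    docker run -d \
      -e NGINX_WORKER_PROCESSES=4 \
      -v /srv/myapp/nginx.conf:/etc/nginx/nginx.conf:ro \
      my-custom-nginx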
That's a perfect example of where the docker-systemctl-replacement script should be used.
It was developed to allow Ansible scripts to target both virtual machines and Docker containers, and dates from when distros switched to systemd, which is hard to enable inside containers. When you overwrite /usr/bin/systemctl with it, the Docker container looks good enough to Ansible that all the old scripts continue to run: installing rpm/deb packages and having 'service:' tasks started and enabled.
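A minimal sketch of how that overwrite could look in a Dockerfile (the base image and the script filename are assumptions based on the project's repo; check its docs for the exact usage):

    # Dockerfile sketch: make the container look systemd-enabled for Ansible
    FROM centos:7
    # Replace systemctl with the docker-systemctl-replacement script
    COPY systemctl3.py /usr/bin/systemctl
    RUN chmod +x /usr/bin/systemctl
    # Run the replacement as the container's init-like foreground process
    CMD ["/usr/bin/systemctl"]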

Ship docker image as an OVF

I have developed an application and am using Docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so that customers can deploy it in their VMware environment?
Also, I am using a base Ubuntu image and have installed Node.js, MongoDB and other dependencies on it. But I would like to configure my Node.js-based application and the MongoDB database as services within the package I intend to ship. I know how to configure these as services using init.d on a normal VM. How do I go about this in Docker? Should I keep my init.d files in my application folder and copy them over to the Docker container during build? Or are there better ways?
Appreciate any advice.
Update:
The reason I ask this question is that my target users need not necessarily know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all log files for the different services would be saved under /var/log, and the status of all the services could be checked at once, rather than the user having to look into each Docker service, and possibly troubleshoot issues with Docker itself.
But at the same time I feel it convenient to build the application the docker way.
VMware vApps are usually made up of multiple VMs running together to provide a service. They may have startup dependencies, etc.
Using Docker, you can have those VMs as containers running on a single Docker host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy pushes us towards microservices; in your case, that means putting each service in a separate container. Then write a Docker Compose file to bring the containers up and add it to the host's startup. After that you can make an OVF of your Docker host VM and ship it.
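A rough sketch of such a Compose file for the Node.js app and MongoDB described above (image names, ports, and volume names are assumptions):

    # docker-compose.yml sketch; image names, ports, and volume names are assumed
    version: "3.8"
    services:
      app:
        image: my-node-app:latest
        ports:
          - "3000:3000"
        depends_on:
          - mongo
        restart: always      # restart automatically when the Docker daemon starts
      mongo:
        image: mongo:6
        volumes:
          - mongo-data:/data/db
        restart: always
    volumes:
      mongo-data: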
A better way, in my opinion, is to create Docker images, put them in your repository, and let the customers pull them; then provide a Docker Compose file for them.

Do I first need a Docker environment before starting my project?

I am going to work with Node.js and PostgreSQL on Linux. I have spent many hours reading about how Docker actually works, but I am still not sure whether a Docker environment is needed before starting my project, or whether I can add Docker after the project is complete.
Let's first understand what Docker is and how you can use it in your project.
Docker has three core concepts:
1) Docker engine: a lightweight runtime and robust tooling that builds and runs your Docker containers.
2) Docker image: a carbon copy of your project environment, including all environment dependencies like the base operating system, host entries, environment variables, databases, and web/application servers. In your case: a Linux distribution of your choice, Node.js with the required modules, and PostgreSQL with its configuration.
3) Docker container: can be visualized as a virtual Linux server running your project. Each time you use docker run, a new container is launched from the Docker image.
You can visualize a Docker environment as a lightweight virtual machine where you can run your project without any external interference (host entries, environment variables, RAM, CPU) from other projects.
So as a developer, you can develop your project on your dev machine, and once it's ready to be pushed to QA/Staging, you can build a Docker image of your project, which can then be deployed on any environment (QA/Staging/Production).
You can launch multiple containers from your image, on a single physical server or across multiple servers.
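As a minimal sketch of that workflow (the image name and port numbers are hypothetical):

    # Build an image of the project, then launch two containers from it
    docker build -t myproject:1.0 .
    docker run -d --name myproject-a -p 3000:3000 myproject:1.0
    docker run -d --name myproject-b -p 3001:3000 myproject:1.0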
You can introduce Docker whenever you want. If you are using multiple servers, you can create a Docker container with one server in it and have the other (non-Dockerised) solution make requests to it.
Or you could Dockerise them both.
Basically, introduce Docker when you feel the time is right.
I like to divide a large project into multiple sections - e.g. front-end web server, backend authentication server, backend API server 1, backend API server 2, etc.
As each part of the project gets completed, I Dockerise it. The other parts then use the Dockerised solution.

Run multiple servers from a docker image

I have three Express servers written in Node.js. These servers serve different purposes and hence run on different ports.
E.g. app1.js on 8000, app2.js on 5000 and app3.js on 5432.
I want to create a Docker image using a Dockerfile and run all these servers. Can we do so? If so, how can we do it? As far as I know, we can run only one command from a Dockerfile.
The mechanism suggested by Ethan is correct for running multiple Docker containers at once, but it does not explain why.
Just to explain a bit further: each Docker container can spawn multiple processes (servers), but a Docker container needs one of the processes to be in the foreground, and the container's lifecycle typically reflects the lifecycle of that foreground process.
A lot of the benefits of dockerization are lost when you run all processes in one Docker container, and hence it is recommended to have one Docker container per process.
You may want to look into using Docker Compose.
Each server would have its own Dockerfile and your docker-compose.yml file would define the ports these expose and how they interact.
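A rough sketch of such a docker-compose.yml, assuming each server lives in its own directory with its own Dockerfile (the directory layout is an assumption; the ports follow the question):

    # docker-compose.yml sketch for the three Express servers
    version: "3.8"
    services:
      app1:
        build: ./app1
        ports:
          - "8000:8000"
      app2:
        build: ./app2
        ports:
          - "5000:5000"
      app3:
        build: ./app3
        ports:
          - "5432:5432"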
While it's not "recommended", sure you can. It's even documented.
Docker and Supervisord
Or you can use Runit
Lately I have been using s6
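As a minimal sketch of the supervisord approach, assuming the three app files from the question live under /app inside the image (the paths are assumptions):

    ; supervisord.conf sketch: run all three Express servers in one container
    [supervisord]
    nodaemon=true

    [program:app1]
    command=node /app/app1.js

    [program:app2]
    command=node /app/app2.js

    [program:app3]
    command=node /app/app3.js

The image would then start supervisord as its single foreground command, for example CMD ["supervisord", "-c", "/etc/supervisord.conf"], so the container's lifecycle follows supervisord rather than any one Node process.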
You may want to check out the Phusion Passenger Node.js image. You can configure it to run a single server that serves data from multiple Node.js processes.
