Run multiple servers from a docker image - node.js

I have three Express servers written in Node.js. They serve different purposes and therefore run on different ports.
E.g. app1.js on 8000, app2.js on 5000, and app3.js on 5432.
I want to create a Docker image using a Dockerfile and run all of these servers. Can we do so? If so, how can we do it? As far as I know, a Dockerfile can run only one command.

The mechanism Ethan suggests is correct for running multiple Docker containers at once, but it does not explain why.
To explain a bit further: each Docker container can spawn multiple processes (servers), but a container needs one of those processes to be in the foreground, and the container's lifecycle typically reflects the lifecycle of that foreground process.
Many of the benefits of dockerization are lost when you run all your processes in one Docker container, which is why one container per process is the recommended approach.

You may want to look into using Docker Compose.
Each server would have its own Dockerfile, and your docker-compose.yml file would define the ports each one exposes and how they interact.
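As a minimal sketch (the directory layout, service names, and image details are assumptions, not taken from the question), the Compose file could look roughly like this:

# docker-compose.yml -- one container per Express server
version: "3"
services:
  app1:
    build: ./app1        # directory holding app1.js and its Dockerfile
    ports:
      - "8000:8000"      # host:container
  app2:
    build: ./app2
    ports:
      - "5000:5000"
  app3:
    build: ./app3
    ports:
      - "5432:5432"

A single docker-compose up then builds and starts all three containers, and docker-compose down stops them together.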

While it's not "recommended", sure you can. It's even documented.
Docker and Supervisord
Or you can use Runit
Lately I have been using s6.
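For completeness, here is a rough sketch of the supervisord variant; the base image, paths, and file names are assumptions, so adjust them to your project:

# Dockerfile -- one container running all three Express servers under supervisord
FROM node:18
RUN apt-get update && apt-get install -y supervisor
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 8000 5000 5432
# supervisord runs in the foreground (-n), so it is the container's main process
CMD ["supervisord", "-n", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

The matching supervisord.conf declares one program per server:

[supervisord]
nodaemon=true

[program:app1]
command=node app1.js

[program:app2]
command=node app2.js

[program:app3]
command=node app3.js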

You may want to check out the Phusion Passenger Node.js image. You can configure it to run a single server that serves data from multiple Node.js processes.

Related

Running an application with multiple processes in a docker container

Assume I have a main application that itself runs multiple sub-applications.
Is it possible to run that main application inside a container?
Currently only the main application starts, but the others don't.
Docker containers are started from a single ENTRYPOINT, but they can of course run multiple binaries. You could, for instance, have a shell script serve as the ENTRYPOINT and start the other binaries from there. Depending on your application, it might make sense to put the sub-applications you refer to into their own containers.
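For illustration, such a wrapper ENTRYPOINT could look roughly like this; the binary names are placeholders for your main and sub-applications:

#!/bin/sh
# entrypoint.sh -- start the sub-applications in the background,
# then exec the main application so it stays in the foreground (PID 1)
./sub-app-1 &
./sub-app-2 &
exec ./main-app

Note that a bare script like this will not restart a sub-application that crashes; if you need that, use an init/supervisor such as supervisord, runit, or s6, or split the sub-applications into their own containers.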

Docker in Docker [duplicate]

Closed as a duplicate of: Is it ok to run docker from inside docker?
We have an app that spins up short-lived Docker containers. Right now it runs on an Ubuntu 16.04 server (VM), and we installed Docker and Node.js on that same server. The Node.js app runs on the server, and whenever a request comes in, it spins up a Docker container and executes the user's input inside that container. Once the container finishes its job, or if it runs out of admin-defined resources, it is forcefully killed (docker kill) and removed (docker rm).
Now my question is: is it best practice to run the Node.js app in an Ubuntu 16.04 Docker container and have it spin up the short-lived Docker containers from inside that container?
In short: run Docker inside another Docker container.
Docker-in-Docker is generally considered fragile and hard to maintain and using it isn’t a best practice. https://hub.docker.com/_/docker/ has a little discussion on this.
A straightforward (but potentially dangerous) way to rearrange this is to give the server process access to the host’s Docker socket, with docker run -v /var/run/docker.sock:/var/run/docker.sock. Then it could launch its own Docker containers as needed. Note that if you do this, these sub-containers’ docker run -v options refer to the host’s filesystem, not the calling container’s filesystem, so if you’re trying to use the filesystem to transfer data this can get tricky. Also note that being able to run any Docker command this way gives unlimited access to the host, so you need to be extremely careful about how you launch containers.
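If the server is itself started via Compose, the same socket mount could be expressed like this (the service name and build context are assumptions):

# docker-compose.yml fragment -- let the server container drive the host's Docker daemon
version: "3"
services:
  api:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # the host's daemon socket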
A larger redesign would be to introduce some sort of message-queueing system; I’ve successfully used RabbitMQ in the past but there are many other options. Instead of the server process launching a subprocess directly, it writes a message to a queue. Instead of the workers being short-lived processes that start and stop frequently, you have a long-lived worker that reads jobs off the queue and does them. This puts you in a much more established Docker space where nothing needs to dynamically start and stop containers, and you can easily test the Node-Rabbit-worker stack in a non-Docker environment.
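As a sketch of what the worker side of that design could look like in Node.js (using the amqplib package; the broker hostname, queue name, and job handling are assumptions):

// worker.js -- long-lived worker that consumes jobs from RabbitMQ
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://rabbitmq');   // assumed broker hostname
  const ch = await conn.createChannel();
  await ch.assertQueue('jobs', { durable: true });
  ch.prefetch(1);                                       // one job at a time

  ch.consume('jobs', async (msg) => {
    const job = JSON.parse(msg.content.toString());
    // ... do here whatever the short-lived container used to do with `job` ...
    ch.ack(msg);                                        // acknowledge when finished
  });
}

main().catch((err) => { console.error(err); process.exit(1); });

The server process then just publishes with ch.sendToQueue('jobs', Buffer.from(JSON.stringify(job)), { persistent: true }) instead of launching containers.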

Is it possible to launch a new Docker container from within a running Docker container using Docker Compose?

I have a Node.js application running inside a Docker Container.
I need to launch a new container from my Node.js application (via code; e.g. child_process.spawn()) with the sole purpose of running a Python script. I also need to pass one argument (a database record ID) to this Python script. So the command is:
python main.py 56fb661b7e51f80736d48113
Note that I do not want this container to run inside the current container but rather to be a separate container.
I understand an orchestration framework such as Swarm or Kubernetes would be better suited for this task, but it has been requested that I use Docker Compose locally on my machine in my development environment, and then we will use Kubernetes in production.
Is it possible to launch a new Docker container (just a container, not a whole new machine/VM) from within a running Docker container using Docker Compose, and if so, how might I go about doing so?
I haven't done it myself, but from what I gather, if you have Docker installed in your child container and you make the host's Docker socket available in the child, you are able to interact with it, i.e.
--volume=/var/run/docker.sock:/tmp/docker.sock
You'll need to configure your child's Docker client to point at that socket (presumably the DOCKER_HOST environment variable should work?), but that's the basic idea. Docker commands run against that socket should take effect on the host.
https://github.com/gliderlabs/registrator uses this method, which might give you some pointers.
Obviously, this way of using Docker creates a number of issues, but if it's the best fit for your situation then go for it.
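Tying this back to the question, here is a hedged sketch of how the Node.js app might spawn a sibling container; it assumes the docker CLI is installed in the app's image, the host socket is mounted as described above, and my-python-image is a hypothetical image containing main.py:

// run-job.js -- launch a sibling container that runs the Python script
const { spawn } = require('child_process');

function runPythonJob(recordId) {
  // equivalent to: docker run --rm my-python-image python main.py <recordId>
  const child = spawn('docker', [
    'run', '--rm',
    'my-python-image',
    'python', 'main.py', recordId,
  ]);
  child.stdout.on('data', (chunk) => console.log(`job output: ${chunk}`));
  child.on('close', (code) => console.log(`job exited with code ${code}`));
}

runPythonJob('56fb661b7e51f80736d48113');

If you mounted the socket at /tmp/docker.sock as above, also set DOCKER_HOST=unix:///tmp/docker.sock (or mount it at the default /var/run/docker.sock) so the CLI can find the daemon.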

Docker instance port Management

I have different Docker instances and I need to start Node.js processes in each of them. For this to happen, does each process need to start on a different port number? How does the container manage that, and is there a Docker management system for it? I want the client to know on which port each instance has started its Node.js process. How can this be automated?
This problem is called "orchestration". I kind of think Docker is a bit overblown because it actually doesn't solve this problem.
Kubernetes is an open source tool. Tutum is an online service. Docker has started a tool but it's not done.
Honestly, it's a bit of a cluster-show at the moment. If you're not hosting 20+ instances, I'd recommend building bash scripts.
Currently, I use a bespoke solution made from DigitalOcean, Dokku, and bash scripting. This gives me the flexibility of a self-hosted Heroku like environment that is very dev friendly.
Dokku lets you deploy Docker apps using a 'git push'. It reads files in your repo to build the image.
You don't have to start the applications inside docker on different ports. You can map any port (for example, port 80) inside the docker container to any port on your host machine.
There is no rule about how to use this to your benefit.
If your clients all have IDs in, say, the 1-10000 range, you can map each client's container port 80 to host port client_id + 20000.
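A sketch of that convention from the host's side (the image name is a placeholder; only the port arithmetic matters here):

# start one container per client, mapping its internal port 80
# to a host port derived from the client id
CLIENT_ID=42
HOST_PORT=$((CLIENT_ID + 20000))
docker run -d --name "client-${CLIENT_ID}" -p "${HOST_PORT}:80" my-node-image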

How to automatically start services inside a docker container

I am trying to find the best way to automatically start services inside a docker container once it has been restarted.
I don't mean starting the Docker container on restart. I'm trying to achieve the following:
I stop a container; and
when I start it again, the same services (processes) I was running before start up again.
I.e., if I am running Apache and SSH inside the container, those services should start on container restart.
That's really not the Docker way (multiple processes per container). You can try to go down that path, as I did for several months, but you'll find yourself going against the Docker team's design principles most of the time. I used the phusion/baseimage base image, and it really is well designed, with a good init process and support for runit and SSH out of the box. Tread carefully if you go down that path, however.
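To illustrate how that looks with phusion/baseimage (the tag, paths, and script name here are assumptions; the image's documentation has the authoritative details): each service is a small runit "run" script under /etc/service/<name>/, and the image's init starts every such service each time the container starts.

# Dockerfile sketch on top of phusion/baseimage
FROM phusion/baseimage:0.11
# each directory under /etc/service gets a "run" script that execs one service
RUN mkdir -p /etc/service/myapp
COPY myapp-run.sh /etc/service/myapp/run
RUN chmod +x /etc/service/myapp/run
# baseimage's init is the entrypoint, so /etc/service entries start on every (re)start
CMD ["/sbin/my_init"]

Here myapp-run.sh would be something like a two-line script: #!/bin/sh followed by exec node /usr/src/app/server.js.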

Resources