I am trying to find the best way to automatically start services inside a docker container once it has been restarted.
I don't mean starting the docker container itself on restart. What I'm trying to achieve is the following:
I stop a container; and
when I start it again, the same services (processes) I was running before will start up again.
I.e. if I am running apache and ssh inside the container, those services should start again when the container restarts.
That's really not the docker way (multiple processes per container). You can try to go down that path, as I did for several months, but you'll find that you'll be going against the docker team's design principles most of the time. I used the phusion/baseimage base image and it really is well designed, with a good init process and support for runit and ssh out of the box. Tread carefully if you go down that path, however.
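For reference, here is a minimal Dockerfile sketch of that approach (the baseimage tag and the apache details are assumptions; adjust them for your versions):

# use phusion/baseimage so its init system boots runit, which (re)starts registered services
FROM phusion/baseimage:0.11
CMD ["/sbin/my_init"]

# install apache
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*

# register apache as a runit service; runit starts it on every container start and restarts it if it dies
RUN mkdir -p /etc/service/apache2 && \
    printf '#!/bin/sh\nexec /usr/sbin/apache2ctl -D FOREGROUND\n' > /etc/service/apache2/run && \
    chmod +x /etc/service/apache2/run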
We have an app that spins up short-lived docker containers. Right now it runs on an Ubuntu 16.04 server (VM) where we installed Docker and Node.js. The Node.js app runs on the same server, so whenever a request comes in, it spins up a docker container and executes the user's input inside that container. Once the container finishes its job, or if it runs out of the admin-defined resources, it is forcefully killed (docker kill) and removed (docker rm).
Now my question is: is it best practice to run an Ubuntu 16.04 docker container and run the Node.js app and the short-lived docker containers inside that Ubuntu 16.04 docker container?
In short, run docker inside another docker container.
Docker-in-Docker is generally considered fragile and hard to maintain, and using it isn't a best practice. https://hub.docker.com/_/docker/ has a little discussion on this.
A straightforward (but potentially dangerous) way to rearrange this is to give the server process access to the host’s Docker socket, with docker run -v /var/run/docker.sock:/var/run/docker.sock. Then it could launch its own Docker containers as needed. Note that if you do this, these sub-containers’ docker run -v options refer to the host’s filesystem, not the calling container’s filesystem, so if you’re trying to use the filesystem to transfer data this can get tricky. Also note that being able to run any Docker command this way gives unlimited access to the host, so you need to be extremely careful about how you launch containers.
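As an illustration (the image names here are placeholders), the setup looks roughly like this:

# give the server container the host's Docker socket
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-node-server

# a sub-container launched from inside it; note that /some/host/path is a path
# on the host, not inside my-node-server, so share data via host paths or named volumes
docker run --rm -v /some/host/path:/data my-job-image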
A larger redesign would be to introduce some sort of message-queueing system; I’ve successfully used RabbitMQ in the past but there are many other options. Instead of the server process launching a subprocess directly, it writes a message to a queue. Instead of the workers being short-lived processes that start and stop frequently, you have a long-lived worker that reads jobs off the queue and does them. This puts you in a much more established Docker space where nothing needs to dynamically start and stop containers, and you can easily test the Node-Rabbit-worker stack in a non-Docker environment.
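A rough sketch of that topology, just to make it concrete (the worker image name is a placeholder):

# broker plus a long-lived worker; nothing has to start or stop containers dynamically
docker network create jobs
docker run -d --name rabbitmq --network jobs rabbitmq:3
docker run -d --name worker --network jobs my-worker-image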
I would like to use Supervisor to run multiple processes in my Docker container, as described here, in Docker docs.
It works but the doc does not say anything about what happens when one of the processes I start crashes.
Following Docker's usual behaviour, when a process crashes the container should stop, and later it should probably be restarted by Docker according to the restart policy.
But that does not happen: if one (or all) of the applications I start exits, the container keeps running.
How can I tell Supervisor to exit as well (and thereby stop the container, since I run it with nodaemon=true) when one of the monitored processes exits/crashes?
I found this article which describes that it's sometimes valid to run multiple processes in one container.
He describes how to use honcho to create the behaviour you would like: stop the whole container when one of the processes fails (see the sketch below).
I'm going to try this now, but I'm still a little bit in doubt because supervisord is used so much more in the docker world and is also described on their own site.
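For reference, the honcho approach roughly amounts to a Procfile like this (the process names and commands are placeholders, not taken from the article):

web: node server.js
sshd: /usr/sbin/sshd -D

The container's CMD would then be something like honcho start, run in the foreground; as the article describes, when one of the listed processes fails the whole thing stops, and the container stops with it.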
if you want to exit the container when your process stops, don't use supervisor (or any other process manager). just run the process in your container, directly.
but more importantly: don't run multiple critical applications in your container. the golden rule of Docker containers is not 1 process per container, but 1 concern per container. that way your container can properly shut down when that 1 concern (application) exits.
even in the example you cite, they are not running 2 critical processes. they are running 1 app process and then hosting sshd in the same container for ssh access. if sshd stops, it's probably not a big deal. if the apache server stops... well, they're using supervisor to handle that and automatically restart it.
to get what you want, separate your concerns into multiple containers and just run the app in the container directly.
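in Dockerfile terms that boils down to something like this (the base image and file names are placeholders):

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
# the app is PID 1: when it exits, the container exits and any restart policy can kick in
CMD ["node", "server.js"]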
We have NodeJS applications running inside docker containers. Sometimes, if a process gets locked up, or due to some other issue, the app goes down and we have to manually log in to each container and restart the application. I was wondering
if there is any sort of control panel that allows us to easily and quickly restart those and see the whole health of the system.
Please note: we can't use the --restart flag because the application doesn't actually exit with an exit code. It runs into problems like a process getting blocked or things getting bogged down, rather than crashing with an exit code. That's why I don't think a restart policy will help in this scenario.
I suggest you consider using the new HEALTHCHECK directive in Docker 1.12 to define a custom check for your locking condition. This feature can be combined with the new Docker swarm service feature to specify how many copies of your container you want to have running.
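For example, a Dockerfile sketch of such a check (the /health endpoint, port, and intervals are assumptions about your app, and curl must be present in the image):

# mark the container unhealthy if the app stops answering its health endpoint
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1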
I have a Node.js application running inside a Docker Container.
I need to launch a new container from my Node.js application (via code; e.g. child_process.spawn()) with the sole purpose of running a Python script. I also need to pass one argument (a database record ID) to this Python script. So the command is:
python main.py 56fb661b7e51f80736d48113
Note that I do not want this container to run inside the current container but rather to be a separate container.
I understand an orchestration framework such as Swarm or Kubernetes would be better suited for this task, but it has been requested that I use Docker Compose locally on my machine in my development environment, and then we will use Kubernetes in production.
Is it possible to launch a new Docker container (just a container, not a whole new machine/VM) from within a running Docker container using Docker Compose, and if so, how might I go about doing so?
I haven't done it myself, but from what I gather, if you have docker installed in your child container and you make the docker socket of the host available in the child, you are able to interact with it, i.e.
--volume=/var/run/docker.sock:/tmp/docker.sock
You'll need to configure your child's docker client to point at that socket (presumably the DOCKER_HOST env var should work?), but that's the basic idea: docker commands run against that socket should execute on the host.
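A rough sketch of that wiring, assuming the docker CLI is installed in the Node container (the image names are placeholders):

# start the Node container with the host's socket mounted at the client's default path
docker run -d --name node-app -v /var/run/docker.sock:/var/run/docker.sock my-node-app

# from inside node-app this now runs the Python script in a sibling container on the host
docker run --rm my-python-image python main.py 56fb661b7e51f80736d48113

# if you mount the socket at /tmp/docker.sock instead, point the client at it first:
# export DOCKER_HOST=unix:///tmp/docker.sock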
https://github.com/gliderlabs/registrator uses this method, which might help give you some pointers.
Obviously, this way of using docker creates a number of issues, but if it's best for your situation then go for it.
I'm planning to set up a jenkins-based CD workflow with Docker at the end.
My idea is to automatically build (by Jenkins) a docker image for every green build, then deploy that image either by jenkins or by 'hand' (I'm not yet sure whether I want every green build to be deployed automatically).
Getting to the point of having a new image built is easy. My question is about the deployment itself. What's the best practice to 'reload' or 'restart' a running docker container? Suppose the image changed for the container, how do I gracefully reload it while having a service running inside? Do I need to do the traditional dance with multiple running containers and load balancing or is there a 'dockery' way?
Suppose the image changed for the container, how do I gracefully reload it while having a service running inside?
You don't want this.
Docker is a simple system for managing apps and their dependencies. It's simple and robust because ALL dependencies of an application are bundled with it. If your app runs today on your laptop, it will run tomorrow on your server. This is because we have captured 100% of the "inputs" for your application.
As soon as you introduce concepts like "upgrade" and "restart", your application can (accidentally) store state internally. That means it might behave differently tomorrow than it does today (after being restarted and upgraded 100 times).
It's better to use a load balancer (or similar) to transition between your versions than to try to muck with the philosophy of Docker.
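As an illustration (container and image names are placeholders; the load balancer configuration itself is out of scope here):

# run the new version alongside the old one
docker run -d --name app-v2 my-app:2.0

# once app-v2 is healthy, switch the load balancer's upstream to it, then retire the old version
docker stop app-v1 && docker rm app-v1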
The Docker container itself should always be treated as immutable, since you replace it for each new deployment. Storing state inside the Docker container will not work when you want to ship new releases often that you've built on your CI.
Docker supports volumes, which let you write files persistently into a folder on the host. When you then upgrade the Docker container, you use the same volume, so you've got access to the same files written by the old container:
https://docs.docker.com/userguide/dockervolumes/
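A small sketch of that pattern (names are placeholders):

# keep persistent files in a named volume
docker volume create app-data
docker run -d --name app-old -v app-data:/data my-app:1.0

# ship a new release: replace the container, reuse the volume
docker stop app-old && docker rm app-old
docker run -d --name app-new -v app-data:/data my-app:1.1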