Supervisor: Stop the Docker container when a process crashes

I would like to use Supervisor to run multiple processes in my Docker container, as described here, in the Docker docs.
It works, but the docs don't say anything about what happens when one of the processes I start crashes.
Following Docker's usual behavior, when a process crashes the container should stop, and later it should be restarted by Docker according to the restart policy.
But that does not happen: if one (or all) of the applications I start exits, the container keeps running.
How can I tell Supervisor to exit as well when one of the monitored processes exits/crashes (and thereby stop the container, since I run Supervisor in nodaemon=true mode)?
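For reference, this is roughly the kind of setup being described; a minimal sketch, with hypothetical program names and commands:

    ; supervisord.conf - a sketch of the setup described above;
    ; program names and commands are hypothetical placeholders
    [supervisord]
    nodaemon=true          ; keep supervisord in the foreground as PID 1

    [program:web]
    command=/usr/local/bin/web-app
    autorestart=true       ; supervisord restarts it; the container never stops

    [program:worker]
    command=/usr/local/bin/worker
    autorestart=true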

I found this article, which describes that it's sometimes valid to run multiple processes in one container.
The author describes how to use honcho to create the behaviour you would like: stop the whole container when one of the processes fails.
I'm going to try this now, but I'm still a little bit in doubt, because supervisord is used so much more in the Docker world and is also described on Docker's own site.
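As a rough illustration of that approach (process names and commands here are hypothetical): honcho, like foreman, shuts down all processes in the Procfile when any one of them exits, so the container exits with it.

    # Procfile - process names and commands are placeholders
    web: node server.js
    worker: node worker.js

    # Dockerfile excerpt: run honcho in the foreground so the container
    # stops as soon as any Procfile process dies
    CMD ["honcho", "start"]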

If you want the container to exit when your process stops, don't use Supervisor (or any other process manager). Just run the process in your container directly.
But more importantly: don't run multiple critical applications in one container. The golden rule of Docker containers is not one process per container, but one concern per container. That way your container can properly shut down when that one concern (application) exits.
Even in the example you cite, they are not running two critical processes. They are running one app process and then hosting sshd in the same container for SSH access. If sshd stops, it's probably not a big deal. If the Apache server stops... well, they're using Supervisor to handle that and automatically restart it.
To get what you want, separate your concerns into multiple containers and just run each app in its container directly.
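For example, a minimal docker-compose sketch of that separation (service and image names are hypothetical); each container runs one process directly, so Docker sees every crash and can apply its restart policy:

    # docker-compose.yml - service and image names are placeholders
    version: "3"
    services:
      app:
        image: example/app        # the one critical process
        restart: on-failure
      ssh:
        image: example/sshd       # separate concern, separate container
        restart: on-failure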

Related

When is a Docker Service Considered to be Running?

I am working on a Debian-based container which runs a C++ compiled application as a child process of a Node.js script. For testing, I am attempting to define a swarm that instantiates this container as a service. When I run "docker service ls", it reports that the service is not running. If I issue "docker ps", however, I can see the identifier for the container, and I can use "docker exec" to connect to it.
As far as I can see, the two target processes are running within the container, yet when other containers within the same swarm, using the same networks, attempt to connect to this server, I see a name resolution error.
Here is my question: if the intended processes are actively and verifiably running within a container, why would Docker consider a service based on that container to not be running? What are Docker's criteria for considering a container to be running as a service?
The status of starting means that it's waiting for the HEALTHCHECK condition to be satisfied.
Examine your Dockerfile's (or compose file's) HEALTHCHECK conditions and investigate why they are not being satisfied.
NOTE: Do not confuse starting with restarting, which means that the program is being launched again (usually after a crash) due to the restart policy.
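To dig into why a health check isn't passing, something like this can help (the container ID and service name are placeholders):

    # show the most recent health-probe results for a container
    docker inspect --format '{{json .State.Health}}' <container-id>

    # show per-task state and errors for a swarm service
    docker service ps <service-name>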

How to audit commands run by user inside a container in K8s

I want to audit commands that are being run by a user inside a running pod.
I know that kube-apiserver supports audit policies that allow you to log every request made to the API, but as far as I know, the API audit only records the exec command itself, not the inner commands run afterwards.
One approach I considered is to have a sidecar container running auditbeat, but it's too heavy, and the user might be able to kill it.
A container should run a single process inside. It is not recommended to run commands inside a container except for testing; most of our images don't have any kind of shell at all.
If you have to spawn a shell and run a command inside, you should ask whether it's possible to run it outside the container. If the main process is terminated but your shell commands are still running inside the container, Kubernetes might not terminate that pod and recreate a new one, which might impact HA.
There are some commercial products that allow you to do this. A few weeks ago I did a PoC for one of them.
The way it's implemented is that their product runs as a pod (with one container inside) at the host level (host namespace / hostPID) and tracks usage of the Docker daemon.
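The deployment pattern described there looks roughly like this as a Kubernetes manifest; a sketch only, with a hypothetical name and image:

    # daemonset.yaml - name and image are hypothetical placeholders
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: command-auditor
    spec:
      selector:
        matchLabels:
          app: command-auditor
      template:
        metadata:
          labels:
            app: command-auditor
        spec:
          hostPID: true                 # share the host's PID namespace
          containers:
          - name: auditor
            image: example/auditor      # hypothetical image
            securityContext:
              privileged: true
            volumeMounts:
            - name: docker-sock
              mountPath: /var/run/docker.sock   # watch the Docker daemon
          volumes:
          - name: docker-sock
            hostPath:
              path: /var/run/docker.sock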

How to make one micro service instance at a time run a script (using dockers)

I'll keep it simple.
I have multiple instances of the same microservice (using Docker), and this microservice is also responsible for syncing a cache.
Every X amount of time, it pulls data from some repository and stores it in a cache.
The problem is that I need only one instance of this microservice to do this job, and if it fails, I need another one to take its place.
Any suggestions on how to do this simply?
By the way, is there an option to tag some microservice Docker instance and make it do some extra work?
Thanks!
The responsibility of restarting a failed service or scaling up/down is that of an orchestrator. For example, in my latest project, I used Docker Swarm.
Currently, Docker's restart policies are:
no: Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]: Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always: Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
unless-stopped: Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put into a stopped state before.
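For the single-active-instance requirement, a Swarm sketch along these lines could work (the service name and image are hypothetical); Swarm keeps exactly one replica scheduled and replaces it if it fails:

    # run exactly one replica; Swarm reschedules it on another node on failure
    docker service create \
      --name cache-sync \
      --replicas 1 \
      --restart-condition on-failure \
      example/cache-sync:latest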

Docker container management solution

We have Node.js applications running inside Docker containers. Sometimes, if a process gets locked up, or due to some other issue, the app goes down and we have to manually log in to each container and restart the application. I was wondering
if there is any sort of control panel that allows us to easily and quickly restart those and see the overall health of the system.
Please note: we can't use the --restart flag, because the application doesn't actually exit with an exit code. It runs into problems like some process getting blocked; things just get bogged down rather than crashing with an exit code. That's why I don't think a restart policy will help in this scenario.
I suggest you consider using the new HEALTHCHECK directive in Docker 1.12 to define a custom check for your locking condition. This feature can be combined with the new Docker swarm service feature to specify how many copies of your container you want to have running.
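A sketch of what that could look like, assuming the Node.js app exposes a health endpoint on port 3000 (both the port and the /health path are assumptions, and curl must be present in the image):

    # Dockerfile excerpt: mark the container unhealthy when the app
    # stops answering, even though the process itself never exits
    HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
      CMD curl -f http://localhost:3000/health || exit 1

In swarm mode, a service task whose container turns unhealthy should be replaced automatically, which would cover the manual restarts described above.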

How to automatically start services inside a docker container

I am trying to find the best way to automatically start services inside a Docker container once it has been restarted.
I don't mean starting the Docker container itself on restart. What I'm trying to achieve is the following:
I stop a container; and
when I start it again, the same services (processes) I was running before start up again.
I.e., if I am running Apache and SSH inside the container, those services should start again on container restart.
That's really not the Docker way (multiple processes per container). You can try to go down that path, as I did for several months, but you'll find that you'll be going against the Docker team's design principles most of the time. I used the phusion/baseimage base image, and it really is well designed, with a good init process and support for runit and SSH out of the box. Tread carefully if you go down that path, however.
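For what it's worth, adding a runit-supervised service on phusion/baseimage looks roughly like this (the service name, script, and image tag are placeholders); a sketch based on that image's documented layout:

    # Dockerfile - service name and run script are hypothetical
    FROM phusion/baseimage:<tag>
    # runit starts anything placed under /etc/service/<name>/run
    RUN mkdir -p /etc/service/myapp
    COPY myapp-run.sh /etc/service/myapp/run
    RUN chmod +x /etc/service/myapp/run
    # my_init is the image's init process; it boots runit and reaps zombies
    CMD ["/sbin/my_init"]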
