How to make one micro service instance at a time run a script (using dockers) - node.js

I'll keep it simple.
I have multiple instances of the same microservice (running in Docker containers), and this microservice is also responsible for syncing a cache.
Every X time it pulls data from some repository and stores it in the cache.
The problem is that I need only one instance of this microservice to do this job, and if it fails, I need another one to take its place.
Any suggestions on how to do this simply?
By the way, is there an option to tag a particular microservice Docker instance and make it do some extra work?
Thanks!

The responsibility of restarting a failed service or scaling up/down is that of an orchestrator. For example, in my latest project, I used Docker Swarm.
Currently, Docker's restart policies are:
no: Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]: Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always: Always restart the container regardless of the exit status. The Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
unless-stopped: Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put into a stopped state before.
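With Swarm, the single-sync-instance requirement from the question maps naturally onto a one-replica service that the orchestrator reschedules on failure. A sketch (the service and image names are placeholders):

```shell
# One replica: only one instance runs the sync job at a time.
# If it fails, Swarm starts a replacement automatically.
docker service create \
  --name cache-sync \
  --replicas 1 \
  --restart-condition on-failure \
  my-registry/cache-sync-service:latest
```

Running the cache-sync responsibility as its own one-replica service also answers the "tag one instance to do extra work" question: the extra work becomes its own service.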

Related

When is a Docker Service Considered to be Running?

I am working on a Debian-based container which runs a compiled C++ application as a child process of a Node.js script. For testing, I am attempting to define a swarm that instantiates this container as a service. When I run "docker service ls", it reports that the service is not running. If I issue "docker ps", however, I can see the identifier for the container, and I can use "docker exec" to connect to it.
As far as I can see, the two target processes are running within the container, yet when other containers within the same swarm, using the same networks, attempt to connect to this server, I see a name resolution error.
Here is my question: If the intended processes are actively and verifiably running within a container, why would docker consider a service based on that container to not be running? What is Docker's criteria for considering a container to be running as a service?
The status of starting means that it's waiting for the HEALTHCHECK condition to be satisfied.
Examine your Dockerfile's (or compose file's) HEALTHCHECK conditions and investigate why they are not being satisfied.
NOTE: Do not confuse starting with restarting, which means that the program is being launched again (usually after a crash) due to the restart policy.
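As a sketch of such a check in a Dockerfile (the endpoint and port are assumptions; `--start-period` gives the application time to boot before failed probes count against it):

```dockerfile
# The service stays in "starting" until this check first succeeds.
HEALTHCHECK --interval=10s --timeout=3s --start-period=30s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```

If the container never satisfies the check, `docker service ls` will keep reporting the task as not running even though its processes are alive.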

AWS ECS Updating service wipes mongo container

I have a node/mongo app deployed using ECS. The running task contains two containers, one for my node api, and another for my mongo database. When I push changes I make to my api and create a new task revision + update my service using it, it deploys the changes, but wipes my database clean every time.
With this setup, is updating a service always going to deploy a new mongo container?
Any chance I could revert to the previous state of that service?
Would it be better to create a separate task for each container?
Any help would be greatly appreciated.
Yes, it will always deploy a new container from the image that is specified in the task definition.
When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout.
update-service
No. When the service is updated, it automatically removes the old container and image, because by default the Amazon ECS container agent automatically cleans up stopped tasks and Docker images that are not being used by any tasks on your container instances.
You can control this behaviour using ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION:
This variable specifies the time to wait before removing any containers that belong to stopped tasks. The image cleanup process cannot delete an image as long as there is a container that references it. After images are not referenced by any containers (either stopped or running), then the image becomes a candidate for cleanup. By default, this parameter is set to 3 hours but you can reduce this period to as low as 1 minute, if you need to for your application.
Yes, it would be better to have a separate task definition. Also, I would recommend mounting a volume into the DB container to avoid such losses in the future.
AWS ecs docker-volumes
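A sketch of the relevant task-definition fragment, assuming a host-path volume (the names and paths here are placeholders): the volume keeps `/data/db` on the container instance, so replacing the mongo container during a deployment no longer wipes the data.

```json
{
  "volumes": [
    { "name": "mongo-data", "host": { "sourcePath": "/ecs/mongo-data" } }
  ],
  "containerDefinitions": [
    {
      "name": "mongo",
      "image": "mongo:4",
      "mountPoints": [
        { "sourceVolume": "mongo-data", "containerPath": "/data/db" }
      ]
    }
  ]
}
```

Note that a host path only survives as long as that container instance does; for durable data you would point the volume at attached EBS storage or move the database out of ECS entirely.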

Supervisor: Stop the Docker container when a process crashes

I would like to use Supervisor to run multiple processes in my Docker container, as described here, in Docker docs.
It works but the doc does not say anything about what happens when one of the processes I start crashes.
Following Docker's behavior logic, when a process crashes the container should stop, and later it should probably be restarted by Docker according to the restart policy.
But that does not happen: if one (or all) of the applications I start exits, the container keeps working.
How can I tell Supervisor to exit as well (and thereby stop the container, because I run it in nodaemon=true mode) when one of the monitored processes exits/crashes?
I found this article, which describes that it's sometimes valid to run multiple processes in one container.
It describes how to use honcho to create the behaviour you would like: stop the whole container when one of the processes fails.
I'm going to try this now, but I'm still a little bit in doubt, because supervisord is used so much more in the Docker world and is also described on Docker's own site.
If you want to exit the container when your process stops, don't use Supervisor (or any other process manager); just run the process in your container directly.
But more importantly: don't run multiple critical applications in your container. The golden rule of Docker containers is not 1 process per container, but 1 concern per container. That way your container can properly shut down when that 1 concern (application) exits.
Even in the example you cite, they are not running 2 critical processes. They are running 1 app process and then hosting sshd in the same container for SSH access. If sshd stops, it's probably not a big deal. If the Apache server stops... well, they're using Supervisor to handle that and automatically restart it.
To get what you want, separate your concerns into multiple containers and just run the app in the container directly.
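As a sketch of that separation (the image names are placeholders), a Compose file gives each concern its own container and its own restart policy, so either one can crash and be restarted without the other noticing:

```yaml
# docker-compose.yml (sketch): one concern per container.
services:
  app:
    image: my-node-app:latest
    restart: on-failure
  web:
    image: httpd:2.4
    restart: on-failure
```

Because each service runs its process as PID 1, an exit is visible to Docker directly and the restart policy applies, with no process manager in between.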

Docker container management solution

We have Node.js applications running inside Docker containers. Sometimes a process locks up, or the app goes down due to some other issue, and we have to manually log in to each container and restart the application. I was wondering
if there is any sort of control panel that would allow us to easily and quickly restart them and see the overall health of the system.
Please note: we can't use the --restart flag, because the application doesn't actually exit with an exit code. It runs into problems like a process getting blocked or things getting bogged down, rather than crashing with an exit code. That's why I don't think a restart policy will help in this scenario.
I suggest you consider using the new HEALTHCHECK directive in Docker 1.12 to define a custom check for your locking condition. This feature can be combined with the new Docker swarm service feature to specify how many copies of your container you want to have running.

How to automatically start services inside a docker container

I am trying to find the best way to automatically start services inside a docker container once it has been restarted.
I don't mean starting the Docker container on restart. I'm trying to achieve the following:
I stop a container; and
when I start it again, the same services (processes) I was running before will start up again.
I.e., if I am running Apache and SSH inside the container, those services should start again on container restart.
That's really not the Docker way (multiple processes per container). You can try to go down that path, as I did for several months, but you'll find that you'll be going against the Docker team's design principles most of the time. I used the phusion/baseimage base image, and it really is well designed, with a good init process and support for runit and SSH out of the box. Tread carefully if you go down that path, however.
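If you do go down that path without a full init system, the usual mechanism is an entrypoint script that starts the auxiliary services in the background and then execs the main one. A sketch, using the Apache-plus-SSH pair from the question (service names are assumptions; only the exec'd process is supervised by Docker):

```shell
#!/bin/sh
# entrypoint.sh (sketch): background services first, then exec the
# main process so it runs as PID 1 and receives stop signals directly.
service ssh start
exec apachectl -D FOREGROUND
```

Because the script runs on every container start, the services come back after `docker stop` / `docker start` without any extra state; the trade-off is that Docker only notices if the exec'd process dies, not the background ones.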
