I have a node/mongo app deployed using ECS. The running task contains two containers, one for my node api and another for my mongo database. When I push changes to my api, create a new task revision, and update my service with it, the changes get deployed, but my database is wiped clean every time.
With this setup, is updating a service always going to deploy a new
mongo container?
Any chance I could revert to the previous state of that service?
Would it be better to create a separate task for each container?
Any help would be greatly appreciated
Yes, it will always deploy a new container from the image that is specified in the task definition.
When UpdateService stops a task during a deployment, the equivalent of
docker stop is issued to the containers running in the task. This
results in a SIGTERM and a 30-second timeout.
(See the update-service documentation.)
No. When the service is updated, the old container and image are removed automatically, because by default the Amazon ECS container agent cleans up stopped tasks and Docker images that are not being used by any tasks on your container instances.
You can control this behaviour using ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION
This variable specifies the time to wait before removing any
containers that belong to stopped tasks. The image cleanup process
cannot delete an image as long as there is a container that references
it. After images are not referenced by any containers (either stopped
or running), then the image becomes a candidate for cleanup. By
default, this parameter is set to 3 hours but you can reduce this
period to as low as 1 minute, if you need to for your application.
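For example, on an EC2-backed container instance this can be set in the ECS agent configuration file (the 1h value below is only an illustration; the agent must be restarted to pick up the change):

    # /etc/ecs/ecs.config
    ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=1h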
Yes, it would be better to have a separate task definition. Also, I would recommend mounting a volume into the DB container to avoid this kind of data loss in the future.
AWS ecs docker-volumes
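As a rough sketch, a task definition with a volume mounted into the mongo container could look like this (the names, image tag and host path are placeholders):

    "volumes": [
      { "name": "mongo-data", "host": { "sourcePath": "/ecs/mongo-data" } }
    ],
    "containerDefinitions": [
      {
        "name": "mongo",
        "image": "mongo:4",
        "mountPoints": [
          { "sourceVolume": "mongo-data", "containerPath": "/data/db" }
        ]
      }
    ]

With /data/db on a volume outside the container's writable layer, replacing the mongo container during a deployment no longer wipes the database.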
I'm trying to deploy the current version of Elasticsearch in an Azure Container Instance using the Docker image; however, I need to set vm.max_map_count=262144. Since the container continually restarts with the error max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144], I can't hit the instance with any commands. Trying to disable restarts or continuing on errors causes the container instance to fail.
From the comments it sounds like you may have resolved the issue. In general for future readers a possible troubleshooting guide is:
If container exits unsuccessfully
Try using EXEC for interactive debugging while container is running. This can be found in the Azure portal as well on the "Containers" tab.
Attempt to run to success on local docker if EXEC did not help.
Once a successful local run is found, upload the new container version to your registry and try to redeploy to ACI.
If container exits successfully and repeatedly restarts
Verify you have a long-running command for the container.
Update the restart policy to Never so upon exit you can debug the terminated container group (a CLI sketch follows below).
If you cannot find issues, follow the local steps and get a successful run with local Docker.
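For example, the EXEC and restart-policy steps above could look roughly like this with the Azure CLI (the resource group, container and image names are placeholders):

    az container exec --resource-group myResourceGroup --name mycontainer --exec-command "/bin/sh"
    az container create --resource-group myResourceGroup --name mycontainer \
        --image myregistry.azurecr.io/myimage:latest --restart-policy Never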
Hope this helps.
I have a Service Fabric cluster with 5 Windows VMs on it. I deploy an application that is a collection of about 10 different containers. Each time I deploy, I increment the tag of the containers with a build number. For example:
foo.azurecr.io/api:50
foo.azurecr.io/web:50
Our build system continuously builds each service, tags it, pushes it to Azure, increments all the images in the ApplicationManifest.xml file, and then deploys the app to Azure. We probably build a new version a few times a day.
This works great, but over the course of a few weeks, the disk space on each VM fills up. This is because each VM still has all those old Docker images taking up disk space. Looking at it right now, there's about 50 gigs of old images sitting around. Eventually, this caused the deployment to fail.
My Question: Is there a standard way to clean up Docker images? Right now, the only idea I have is to create some sort of Windows Scheduler task that runs docker image prune --all every day or something. However, at some point we want to be able to create new VMs on the fly as needed, so I'd rather each VM be a "stock" image. The other idea would be to use the same tag each time, such as api:latest and web:latest. However, then I'd have to figure out a way to get each VM to issue a docker pull command to get the latest version of the image.
Has anyone solved this problem before?
You can configure PruneContainerImages to True. This will enable the Service Fabric runtime to remove unused container images. See this guide.
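As a sketch, that setting lives under the Hosting section of the cluster's fabricSettings (in the ARM template or standalone ClusterConfig.json; the surrounding structure is abbreviated here):

    "fabricSettings": [
      {
        "name": "Hosting",
        "parameters": [
          { "name": "PruneContainerImages", "value": "True" }
        ]
      }
    ]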
I'll keep it simple.
I have multiple instances of the same microservice (using Docker containers), and this microservice is also responsible for syncing a cache.
Every X amount of time, it pulls data from some repository and stores it in the cache.
The problem is that I need only one instance of this microservice to do this job, and if it fails, I need another one to take its place.
Any suggestions on how to do this simply?
Btw, is there an option to tag some microservice Docker instance and make it do some extra work?
Thanks!
The responsibility of restarting a failed service or scaling up/down is that of an orchestrator. For example, in my latest project, I used Docker Swarm.
Currently, Docker's restart policies are:
no: Do not automatically restart the container when it exits. This is the default.
on-failure[:max-retries]: Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always: Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
unless-stopped: Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put into a stopped state before.
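As a minimal sketch with Docker Swarm (the image and service names are placeholders), the cache-sync job could run as a service with a single replica, so the swarm reschedules it on another node if the task or the node fails:

    docker service create --name cache-sync --replicas 1 \
        --restart-condition on-failure mycompany/cache-sync:latest

Outside of swarm mode, the equivalent for a plain docker run is the --restart flag with one of the policies above.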
We have NodeJS applications running inside Docker containers. Sometimes, if a process gets locked up or because of some other issue, the app goes down and we have to manually log in to each container and restart the application. I was wondering
if there is any sort of control panel that would allow us to easily and quickly restart those containers and see the overall health of the system.
Please note: we can't use the --restart flag because the application doesn't actually exit with an exit code. It runs into problems like a process getting blocked or things getting bogged down, rather than crashing with an exit code. That's why I don't think a restart policy will help in this scenario.
I suggest you consider using the new HEALTHCHECK directive in Docker 1.12 to define a custom check for your locking condition. This feature can be combined with the new Docker swarm service feature to specify how many copies of your container you want to have running.
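A minimal sketch of such a Dockerfile check, assuming the Node app exposes a health endpoint on port 3000 and that curl is available in the image (both assumptions):

    HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
        CMD curl -f http://localhost:3000/health || exit 1

When the check fails repeatedly the container is marked unhealthy, and a swarm service can then replace the task even though the process never exited on its own.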
I'm trying to figure out if best practices would dictate that when deploying a new version of my web app (nodejs running in its own container) I should:
Do a git pull from inside the container and update "in place"; or
Create a new container with the new code and perform a hot swap of the two docker containers
I may be missing some technical details as I'm very new to the idea of containers.
The second approach is the best practice: you would make a second version of your image (with the new code), stop your container, and run a second container based on that second version.
The idea is that you can easily roll back, since the first version of your image can be used at any time to run the container that was initially in production.
Trying to modify a running container is not a good idea: once it is stopped and removed, running it again would start from the original image, with its original state. Unless you commit that container to a new image, those changes would be lost. And even if you did commit, you would not be able to easily rebuild that image (plus you would commit the whole container: its new code, but also all the additional files created while the server was running, such as logs and other files, which is not very clean).
A container is supposed to be run from an image that you can precisely build from the specifications of a Dockerfile. It is not supposed to be modified at runtime.
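A rough sketch of that hot swap (the image and container names are placeholders):

    docker build -t myapp:2.0 .                 # new image from the new code
    docker stop myapp && docker rm myapp        # stop the container running 1.0
    docker run -d --name myapp -p 3000:3000 myapp:2.0
    # rolling back is just running the previous tag again:
    # docker run -d --name myapp -p 3000:3000 myapp:1.0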
A couple of caveats though:
if your container is used (--link) by other containers, you would need to stop those first, stop your container and run a new one from the new version of the image, then restart your other containers.
don't forget to remount any data containers that you were using in order to get your persistent data.