Does a restart of an Azure Container Instance update the image?

I'm running an Azure Container Instance of a rather large image (~13GB). When I create a new instance it takes around 20 minutes to pull the image from the Azure Registry. When I update the image and then restart the container it also says it's pulling, but it only takes a few seconds. I tested it by changing the console output and it actually seems to update the image, but why is it taking so much less time?

ACI creates containers without you having to care about the underlying infrastructure; under the hood, however, these containers still run on hosts. The first time you start your container, the underlying host is unlikely to have your container image cached, so it has to download the whole image, which for a large image takes a while.
When you restart a running container, most of the time it will restart on the same host, which already has the old image cached. To update to the new image it only needs to download the changed layers, which is quick.
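As a rough illustration with the Azure CLI (the resource group, instance, and registry names below are made up): restarting usually re-pulls only the changed layers, while creating a fresh instance can trigger a full pull.

    # Restart in place: most of the time this lands on the same host,
    # which already has the old layers cached, so only the diff is pulled.
    az container restart --resource-group myRG --name myapp

    # A brand-new instance may land on a host with nothing cached,
    # so the full ~13 GB image has to come down from the registry.
    az container create --resource-group myRG --name myapp2 \
        --image myregistry.azurecr.io/bigimage:latest \
        --registry-login-server myregistry.azurecr.io \
        --registry-username <user> --registry-password <password>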

Related

Windows image in Azure Container Instances takes too long to start

I tried creating a base Windows image (tag:2004) in Azure Container Instances and it took more than 10 minutes to start.
Is this normal? From what I've read it should take seconds to spin up a container.
Since Windows images are quite large and contain a lot of software and files, spinning one up can take time.
Yes user238923, you are looking at the right document. You can check which Windows images Azure Container Instances has cached in a specific location:
Cached Windows images in the westcentralus location
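If you want to verify for yourself what is cached in a region, the Container Instances management API exposes a cachedImages operation; a hedged sketch via az rest (substitute your own subscription id, and check which api-version your subscription supports):

    az rest --method get --url \
        "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.ContainerInstance/locations/westcentralus/cachedImages?api-version=2023-05-01"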

AWS ECS Updating service wipes mongo container

I have a node/mongo app deployed using ECS. The running task contains two containers, one for my node api, and another for my mongo database. When I push changes I make to my api and create a new task revision + update my service using it, it deploys the changes, but wipes my database clean every time.
With this setup, is updating a service always going to deploy a new mongo container?
Any chance I could revert to the previous state of that service?
Would it be better to create a separate task for each container?
Any help would be greatly appreciated.
Yes, it will always deploy a new container from the image that is specified in the task definition.
When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout.
update-service
No. When the service is updated, it automatically removes the old container and image, because by default the Amazon ECS container agent cleans up stopped tasks and Docker images that are not being used by any tasks on your container instances.
You can control this behaviour with ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION:
This variable specifies the time to wait before removing any containers that belong to stopped tasks. The image cleanup process cannot delete an image as long as there is a container that references it. After images are not referenced by any containers (either stopped or running), then the image becomes a candidate for cleanup. By default, this parameter is set to 3 hours but you can reduce this period to as low as 1 minute, if you need to for your application.
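For example, on an EC2-backed container instance this goes in /etc/ecs/ecs.config, which the agent reads at startup (10m is just an illustrative value):

    ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=10m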
Yes, it would be better to have a separate task definition for each container. I would also recommend mounting a volume in the DB container to avoid such losses in the future; see the sketch after the link below.
AWS ecs docker-volumes
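A minimal sketch of that volume approach (the family name, image tag, paths, and memory are illustrative, not taken from your setup): the mongo container writes /data/db to a host volume, so a new task revision reattaches the same data. taskdef.json:

    {
      "family": "app",
      "volumes": [
        { "name": "mongo-data", "host": { "sourcePath": "/ecs/mongo-data" } }
      ],
      "containerDefinitions": [
        {
          "name": "mongo",
          "image": "mongo:4",
          "memory": 512,
          "mountPoints": [
            { "sourceVolume": "mongo-data", "containerPath": "/data/db" }
          ]
        }
      ]
    }

Register it with:

    aws ecs register-task-definition --cli-input-json file://taskdef.json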

Cleaning up old Docker images on Service Fabric cluster

I have a Service Fabric cluster with 5 Windows VMs on it. I deploy an application that is a collection of about 10 different containers. Each time I deploy, I increment the tag of the containers with a build number. For example:
foo.azurecr.io/api:50
foo.azurecr.io/web:50
Our build system continuously builds each service, tags it, pushes it to Azure, increments all the images in the ApplicationManifest.xml file, and then deploys the app to Azure. We probably build a new version a few times a day.
This works great, but over the course of a few weeks, the disk space on each VM fills up. This is because each VM still has all those old Docker images taking up disk space. Looking at it right now, there's about 50 gigs of old images sitting around. Eventually, this caused the deployment to fail.
My Question: Is there a standard way to clean up Docker images? Right now, the only idea I have is to create some sort of Windows Scheduler task that runs docker image prune --all every day or something. However, at some point we want to be able to create new VMs on the fly as needed, so I'd rather each VM be a "stock" image. The other idea would be to use the same tag each time, such as api:latest and web:latest. However, then I'd have to figure out a way to get each VM to issue a docker pull command to get the latest version of the image.
Has anyone solved this problem before?
You can set PruneContainerImages to True. This enables the Service Fabric runtime to remove unused container images from the nodes. See this guide.
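For reference, that setting lives in the cluster's fabricSettings (shown here as an ARM template fragment, matching the linked guide):

    "fabricSettings": [
      {
        "name": "Hosting",
        "parameters": [
          { "name": "PruneContainerImages", "value": "True" }
        ]
      }
    ]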

Restoring the state of docker containers

I have a docker container in a hyperledger fabric setup. This stores all user credentials.
What happens if this container or machine goes down and is not available?
If I bring up a backup container, how can the entire state be restored?
I tried the commit option, but after bringing the container back up it does not work as expected. Most likely the CA functionality tracks some container id, since a CA server is a highly security-sensitive piece of the setup.
Overall, this is more of a strategy question, there are many approaches to backing up critical data - and you may or may not choose one that is specific to Docker containers.
On the technical questions that you asked:
If the container 'goes down', its files remain intact and will be there when it is restarted (that is, if you re-start the same container and don't create a new one). If the machine goes down, the container will come 'back up' if and when the machine is restarted. Depending on how you created the container, you may need to start it yourself or Docker may restart it automatically. If it went down hard and won't come back - you lose all data on it, including files in containers.
You can create a 'backup container' (or more precisely, a backup image), but if it was left on the same machine it will die with that machine. You will need to save it elsewhere (e.g., with 'docker push', though I don't recommend that unless you have your own docker registry to use for backups).
If you do 'commit', this simply creates a new container image, which has the files as they were when you did the commit. You should commit a stopped container, if you want a proper copy of all files - I don't think you can do it while there are active open files. This copy lives on the same machine where the container was, so you still need to save it away from that machine to protect it from loss. Note that to use the saved image, you should tag it and use it to start a new container. The image from which you started the old container is untouched by the 'commit' (using that old image will start the container as it was then, when you first created it).
IMO, an option better than 'commit' (which saves the entire container file system, along with all the junk like logs and temp. files) is to mount a docker volume to the path where important files are stored (e.g., /var/lib/mysql, if you run a mysql database) - and back up only that volume.
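A minimal sketch of that volume approach for the CA, assuming its state lives under /etc/hyperledger/fabric-ca-server (the container, volume, and archive names are illustrative):

    # Run the CA with its state directory on a named volume:
    docker run -d --name fabric-ca \
        -v ca-data:/etc/hyperledger/fabric-ca-server \
        hyperledger/fabric-ca

    # Archive that volume from a throwaway container, then copy the
    # tarball off the machine:
    docker run --rm --volumes-from fabric-ca -v "$(pwd)":/backup \
        busybox tar czf /backup/ca-data.tar.gz /etc/hyperledger/fabric-ca-server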

Should you recreate containers when deploying web app?

I'm trying to figure out if best practices would dictate that when deploying a new version of my web app (nodejs running in its own container) I should:
Do a git pull from inside the container and update "in place"; or
Create a new container with the new code and perform a hot swap of the two docker containers
I may be missing some technical details as I'm very new to the idea of containers.
The second approach is the best practice: you would make a second version of your image (with the new code), stop your container, and run a second container based on that second version.
The idea is that you can easily roll-back as the first version of your image can be used to run the container that was initially in production at any time.
Trying to modify a running container is not a good idea: once it is stopped and removed, running it again would start from the original image, with its original state, and unless you commit that container to a new image, those changes would be lost. Even if you did commit, you would not be able to easily rebuild that image. (Plus you would commit the whole container: its new code, but also a bunch of additional files created during the execution of the server, such as logs: not very clean.)
A container is supposed to be run from an image that you can precisely build from the specifications of a Dockerfile. It is not supposed to be modified at runtime.
A couple of caveats, though:
if your container is used (--link) by other containers, you would need to stop those first, stop your container and run a new one from the new version of the image, then restart your other containers.
don't forget to remount any data containers that you were using in order to get your persistent data.
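A minimal sketch of that swap (the image names, tag, port, and volume are illustrative):

    docker build -t myapp:v2 .          # new image with the new code
    docker stop web && docker rm web    # retire the old container
    # Run the new version, remounting the persistent data volume:
    docker run -d --name web -p 80:3000 -v app-data:/var/lib/app myapp:v2
    # If something breaks, roll back by starting a container from myapp:v1.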
