I tried creating a base Windows image (tag:2004) in Azure Container Instances and it took more than 10 minutes to start.
Is this normal? From what I've read it should take seconds to spin up a container.
Windows base images are quite large; they contain a lot of software and files, so pulling and starting them can take a while.
Yes user238923, you are reading the right documentation. Azure Container Instances caches Windows images only in specific regions; for example, Windows images are cached in the westcentralus location.
I have been using Azure ML Studio for a while now and it was really fast, but now when I try to unzip a folder containing around 3000 images using
!unzip "file.zip" -d "to unzip directory"
it takes more than 30 minutes, and other activities (longer concatenation methods) also seem to take a long time, even using NumPy arrays. I am wondering whether it is a configuration issue or something else. I have tried switching locations, creating new resource groups and workspaces, and changing computes (both CPU and GPU).
My compute and the rest of my current configuration can be seen in the image.
When you are using a notebook, your local directory is persisted on a (remote) Blob Store. Consequently, you are limited by network latency and, more significantly, by the IOPS your compute agent has.
What has worked for me is to use the local disk mounted on the compute agent. NOTE: this is not persisted, and everything on it will disappear when the compute agent is stopped.
After doing all your work, you can move the data to your persistent storage (which should be in your list of mounts). This might still be slow but you don't have to wait for it to complete.
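A minimal sketch of that workflow, assuming a Linux compute agent where `mktemp -d` lands on the node-local disk; the archive name mirrors the question, and `python3 -m zipfile` stands in for `unzip` so the sketch is self-contained:

```shell
# Sketch only -- paths and file names are assumptions; adjust to your setup.
# (We fabricate a tiny stand-in archive here so the sketch runs anywhere;
# in the notebook, file.zip is your real archive on the mounted share.)
python3 -c "import zipfile; zipfile.ZipFile('file.zip','w').writestr('a.txt','hello')"

WORK=$(mktemp -d)                    # node-local disk, not the Blob mount
cp file.zip "$WORK/"                 # one sequential read from the share
python3 -m zipfile -e "$WORK/file.zip" "$WORK/images"   # fast: local-disk IOPS
ls "$WORK/images"                    # work against $WORK/images from here on
```

When you are done, copy the results back to the persisted mount; that copy is still slow, but you can let it run in the background instead of blocking every read on it.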
For the students in my teaching classes, I create short-lived Azure VMs based on an image that I created using sysprep and captured. That all works great.
But my problem is that each time I sysprep and capture my master VM I lose it, which means I have to recreate the master VM from scratch each time I want to update the image, and that takes many hours.
I have seen many fragile approaches, which all seem to involve a lot of manual steps and low-level disk backup/copy/VHD juggling to get around this.
So my question is what is the best approach for me as a teacher to keep my master VM alive so that I don't have to re-create it from scratch each time I need to create a new image for my clones?
Surely there must be a better way to do this?
For your requirement, I think you need to make a copy of your VM and then create the image from the copied VM; that way your original VM stays alive. You can follow the copy steps here, then create the image as before.
You do need to create a new image each time you update your VM, because all the clone VMs are created from the image. There is no way around that step.
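As a sketch of the copy step with the Azure CLI (resource group, VM, and disk names below are placeholders, and this needs an authenticated `az` session against your subscription):

```shell
# 1. Snapshot the master VM's OS disk (non-destructive to the master).
az snapshot create \
  --resource-group teaching-rg \
  --name master-snap \
  --source master-vm-osdisk

# 2. Create a new managed disk from the snapshot ...
az disk create \
  --resource-group teaching-rg \
  --name master-copy-disk \
  --source master-snap

# 3. ... and a throwaway copy VM attached to that disk.
az vm create \
  --resource-group teaching-rg \
  --name master-copy \
  --attach-os-disk master-copy-disk \
  --os-type windows

# Sysprep and capture master-copy; the original master VM is untouched.
```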
I'm running an Azure Container Instance of a rather large image (~13GB). When I create a new instance it takes around 20 minutes to pull the image from the Azure Registry. When I update the image and then restart the container it also says it's pulling, but it only takes a few seconds. I tested it by changing the console output and it actually seems to update the image, but why is it taking so much less time?
ACI creates containers without you having to care about the underlying infrastructure, however under the hood these containers are still running on hosts. The first time you start your container, unless you are very lucky, it is unlikely that the underlying host has your container image cached and so it has to download the image, which for a large image will take a while.
When you restart a running container, most of the time it will restart on the same host, and so already have the old image cached. To update to the new image it will only need to download the difference, which is quick.
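One way to make that delta small by construction (a sketch; the base image and paths are illustrative only): order the Dockerfile so the huge, rarely-changing pieces sit in the lowest layers, and put the frequently-changing application code last. A host that has the old image cached then only downloads the small top layers on update.

```dockerfile
# Hypothetical layout -- image name and paths are placeholders.
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Big, rarely-changing dependencies first: stable, cacheable lower layers.
COPY runtime/ C:/runtime/

# Frequently-changing application code last: only these layers are
# re-pulled when the image is updated on a host holding the old version.
COPY app/ C:/app/
CMD ["C:/app/start.exe"]
```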
I have a Service Fabric cluster with 5 Windows VMs in it. I deploy an application that is a collection of about 10 different containers. Each time I deploy, I increment the tag of the containers with a build number. For example:
foo.azurecr.io/api:50
foo.azurecr.io/web:50
Our build system continuously builds each service, tags it, pushes it to Azure, bumps all the image tags in the ApplicationManifest.xml file, and then deploys the app to Azure. We probably build a new version a few times a day.
This works great, but over the course of a few weeks, the disk space on each VM fills up. This is because each VM still has all those old Docker images taking up disk space. Looking at it right now, there's about 50 gigs of old images sitting around. Eventually, this caused the deployment to fail.
My question: Is there a standard way to clean up Docker images? Right now, the only idea I have is to create some sort of Windows scheduled task that runs docker image prune --all every day or so. However, at some point we want to be able to create new VMs on the fly as needed, so I'd rather each VM be a "stock" image. The other idea would be to use the same tag each time, such as api:latest and web:latest, but then I'd have to figure out a way to get each VM to issue a docker pull to get the latest version of the image.
Has anyone solved this problem before?
You can set PruneContainerImages to True. This enables the Service Fabric runtime to remove unused container images. See this guide.
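As a sketch, in an ARM template this is a parameter on the Hosting section of the cluster's fabricSettings (the surrounding template is elided, and the exact property layout should be checked against the Service Fabric docs):

```json
"fabricSettings": [
  {
    "name": "Hosting",
    "parameters": [
      { "name": "PruneContainerImages", "value": "True" }
    ]
  }
]
```

Because the runtime does the pruning on every node, this also covers the "new VMs on the fly" case without baking a scheduled task into the VM image.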
I have an Ubuntu 14 VM on Azure hosting the web sites I develop. (I do not think the OS matters for this question, but you never know.)
I've discovered the relatively new Capture button, so for the storage price of a disk I regularly save a "snapshot" via the Capture function (I am not preparing the image for provisioning, i.e. I am not checking the "I have run 'waagent -deprovision' on the virtual machine" checkbox). Be aware: this quickly becomes pretty addictive.
The result is an image that I can use when creating new machines; it appears under My Images in the wizard. This can function as a backup/rollback workflow: I delete the VM and create a new one from the image produced by one of the previously captured "snapshots". (Again, no provisioning takes place.)
It is possible to initiate the Capture operation on a running VM. What is not clear to me: the result is an image that serves as a template for a new VM, and that VM will start up and boot, but in what state will its filesystem etc. be?
Isn't that similar to a sudden power loss? If so, it would be strongly recommended to always shut down the VM before capturing; however, that is such a pain and productivity killer that no one (including me) wants to do it unless it is mandatory.
Accidentally I've switched to the new Azure portal and there the Capture UI says:
Capturing a virtual machine while it's running isn't recommended.
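The safe sequence from the CLI would look something like this (a sketch with placeholder names, needing an authenticated `az` session): deallocating first guarantees a consistent filesystem in the captured image, at the cost of a short downtime, whereas capturing while running risks exactly the power-loss-like state described above.

```shell
# Placeholder resource group and VM names -- adjust to your setup.
az vm deallocate --resource-group web-rg --name web-vm
az image create  --resource-group web-rg --name web-vm-snapshot --source web-vm
az vm start      --resource-group web-rg --name web-vm
```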