For the students in my classes, I create short-lived Azure VMs based on an image that I created using sysprep and captured. That all works great.
But my problem is that each time I sysprep and capture my master VM, I lose it. That means I have to recreate the master image from scratch every time I want to update it, which takes many hours.
I have seen many fragile workarounds, all of which seem to involve a lot of manual steps and low-level disk backup/copy/VHD manipulation to get around this.
So my question is: what is the best approach for me, as a teacher, to keep my master VM alive, so that I don't have to recreate it from scratch each time I need to build a new image for my clones?
I am sure there must be a better way to do this?
For your requirement, I think you need to make a copy of your VM and then create the image from the copy, so your original VM stays alive. You can follow the copy steps here. Then create the image as before.
You still need to create a new image each time you update your VM, because all the clone VMs are created from the image; there is no way around that step.
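As for the copy step itself, here is a sketch using the current Azure CLI, assuming managed disks (the resource group and resource names below are placeholders, not anything from your setup):

    # Snapshot the master's OS disk; the master VM itself is left untouched
    az snapshot create --resource-group TeachingRG --name master-os-snap \
      --source MasterVM_OsDisk

    # Create a new managed disk from that snapshot
    az disk create --resource-group TeachingRG --name MasterCopy_OsDisk \
      --source master-os-snap

    # Attach the disk copy to a brand-new VM; sysprep and capture THIS one
    az vm create --resource-group TeachingRG --name MasterCopyVM \
      --attach-os-disk MasterCopy_OsDisk --os-type windows

That way the original master never goes through sysprep; only the throwaway copy is ever generalized.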
I have a three-container group running on Azure, plus an additional run-once container which I'm aiming to run once a day to update data on a file mount (which the server containers read from). I'm then looking to restart the containers in the group once the data has been updated. Is there any easy way to achieve this with the Azure stack?
Tasks seem like the right kind of thing; however, I only seem to be able to mount secrets rather than standard volumes, which makes them unable to do what's required. Is there another solution I'm missing?
Thanks!
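For what it's worth, the restart step on its own is scriptable against the container group (a sketch; names below are placeholders), so the open question is mainly where to schedule it:

    # Restart every container in the ACI group after the daily data refresh
    az container restart --resource-group MyRG --name my-container-group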
The Azure portal web interface has several options for creating 'images' of a VM, including:
snapshot - creates a snapshot of the machine, which can presumably be restored or copied (what I am trying to do, without much success so far)
capture - generalises the VM into an image that can be used to create multiple VMs (in theory)
The capture option makes the original VM unusable. In fact, you are prompted about whether you want to keep it, since it will no longer run (and indeed it can't).
Why is capture a destructive operation?
When you generalise an image using sysprep, it removes the machine-specific customisation from your VM, and that particular VM is then of no use except as a golden image. This golden image can then be used as a template to spin up more VMs by supplying the parameters that sysprep removed.
If you would like to keep the VM you are using for capture, it is recommended that you make a copy of it first and then put the copy through the sysprep and capture process.
Refer below for details
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/capture-image-resource
A snapshot is more a record of the VM's state at a given point in time. It is mostly used for migration to another region, or to capture the state of an application running on the VM before an application upgrade or patch.
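For reference, the generalize-and-capture flow from that link boils down to roughly the following with the Azure CLI, run against the copy VM once sysprep has shut it down (resource names are placeholders):

    # Deallocate the sysprepped copy and flag it as generalized
    az vm deallocate --resource-group MyRG --name CopyVM
    az vm generalize --resource-group MyRG --name CopyVM

    # Create a managed image from it; CopyVM is now only a golden-image source
    az image create --resource-group MyRG --name GoldenImage --source CopyVM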
I am working on an Azure deployment. I am using some templates from GitHub that create a certain number of VMs based on a 'master image', put them behind a load balancer, and allow access to them through RDP and specific ports.
Now, all this is working great. I build my image, then I run sysprep and generalize it, shut it down, and spin up 40 copies.
The issue I am running into is what do I do if I want to update the 'master image'?
It won't let me boot it up, because it says it is generalized. And I am having a hard time setting up a new VM and attaching the OS disk (not sure if this is the right way).
Does anyone have any suggestions? I am coming from a VMware VDI environment, where I would just boot up the master, make changes, shut down, snapshot, and redeploy.
Also I am using the new Azure interface, which I believe is called AzureRM.
Error message: Operation Start VM is not allowed on VM xxx since the VM is generalized.
It works like versioning: you have to create a new VM from the image you made before, and then repeat the capture process again after your changes.
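In CLI terms, each versioning iteration looks roughly like this (resource names and credentials below are placeholders):

    # Stand up an editable VM from the previous golden image
    az vm create --resource-group MyRG --name MasterV2 --image GoldenImageV1 \
      --admin-username azureuser --admin-password '<your-password>'

    # ...make your changes, sysprep, deallocate, generalize, capture GoldenImageV2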
Well, it's not pretty, but it should work:
Spin up a fresh copy, make your changes, then perform the sysprep/OOBE process again; finally, generalize and capture.
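The sysprep/OOBE step itself is run inside the Windows copy:

    REM Run inside the VM; generalizes Windows and shuts the machine down
    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

Once it powers the VM off, deallocate, generalize, and capture as usual.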
I have an Ubuntu 14 VM on Azure that hosts the web sites I develop. (I do not think the OS matters for this question, but you never know.)
I've discovered the relatively new Capture button, so for the storage price of a disk I regularly save a "snapshot" via the Capture function (I am not preparing the image for provisioning, meaning I do not check the "I have run 'waagent -deprovision' on the virtual machine" checkbox). Be warned, this quickly becomes pretty addictive.
The result is an image that I can use when creating new machines; it shows up under My Images in the wizard. This can serve as a backup/rollback workflow: if I delete the VM, I create a new one from the image produced by one of the previously captured "snapshots" (again, no provisioning takes place).
It is possible to initiate the Capture operation on a running VM, but it is not clear to me what the result will be: when the resulting image is used as a template for a new VM and that VM starts up and boots, what state will the filesystem etc. be in?
Isn't that similar to the state after a sudden power loss? If so, then it would be strongly recommended to always shut down the VM before capturing; however, that is such a pain and productivity killer that no one (including me) wants to do it unless it is mandatory.
I happened to switch to the new Azure portal, where the Capture UI says:
Capturing a virtual machine while it's running isn't recommended.
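If you do want a file-system-consistent capture, the shutdown-first approach can at least be scripted so it hurts less. A sketch with the current CLI, using a disk snapshot as the "capture" (names below are placeholders):

    # Deallocate so the disk is quiesced, snapshot it, then start back up
    az vm deallocate --resource-group MyRG --name WebVM
    az snapshot create --resource-group MyRG --name webvm-snap-$(date +%Y%m%d) \
      --source WebVM_OsDisk
    az vm start --resource-group MyRG --name WebVM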
I'm currently working on a project where users will upload projects, and other users will be able to clone those projects (think GitHub-esque).
Now, my initial idea is to create a container for each project, making it easy to clone them, though I will still store a reference to each file and its location in the database.
Would creating a container for each project be the best option, or should I stick to a container per user? I know the object-count limits on containers are huge, but I feel my initial plan would scale better.
Thoughts people?
This is just personal opinion, as I am currently also using Rackspace Cloud in my project. I think that creating one container for each user is still a good option, since you can copy and move objects within a container.
Also, by creating a container per user, you can easily get the current object count and total size of that user's container, so you can tell how much space they have used (and therefore have free) without calculating it yourself.
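For example, with the OpenStack Swift API that Cloud Files exposes, a single HEAD request on a user's container reports its usage (the endpoint, account, and token below are placeholders):

    # HEAD the container; Swift returns usage in the response headers
    curl -I -H "X-Auth-Token: $TOKEN" \
      "https://storage101.dfw1.clouddrive.com/v1/MossoCloudFS_<account>/<username>"
    # Look for: X-Container-Object-Count and X-Container-Bytes-Used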