How do you rename a GCE persistent disk?

This should be easy but ...
I have been working on a Google Compute Engine persistent disk image that I'm calling utilserver, and now need to build it again from scratch, but I might need the original one to try a few things in case problems come up. So I'd like to rename utilserver to utilserver-backup and then create a new utilserver that will hopefully end up being more correct. However, under the web console for my project there's only a "Delete" button, no "Rename" button, and gcutil doesn't seem to have a rename command either. OK, I tried creating a snapshot of utilserver and then, from that, a new persistent disk called utilserver-backup, but the new disk looked like a completely new image: none of my prior installation work was on there. Any ideas here?

You can create a snapshot of your disk and then create multiple disks from that snapshot. The snapshot also gives you a backup of your original disk, so you can then delete the original and create a new one with the same name. You can refer to the following link for more details about snapshots: https://cloud.google.com/compute/docs/disks/create-snapshots
I have personally tried creating a new disk from a snapshot using the following command, and it created a new disk with all my data:
gcutil adddisk <disk-name> --project=<project-id> --source_snapshot=<snapshot-name>

gcutil has been deprecated in favor of gcloud compute.
gcloud compute disks create <new-disk-name> --source-snapshot <snapshot-name> --zone=<zone-name>
Example:
gcloud compute disks create production --source-snapshot production-backup-2023-01-23 --zone=asia-southeast1-b
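Roughly, the full rename workflow for the original question would then be: snapshot the disk, recreate it under the backup name, and delete the original so the old name is free again. A rough sketch, where the snapshot name and zone below are placeholders to adjust for your project:
gcloud compute disks snapshot utilserver --snapshot-names=utilserver-snap --zone=<zone>
gcloud compute disks create utilserver-backup --source-snapshot=utilserver-snap --zone=<zone>
# once you've verified the copy, free up the old name
gcloud compute disks delete utilserver --zone=<zone>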

Related

Recover deleted folder from Google VPS

We have a VPS running on Google Cloud which had a very important folder in a user directory. An employee of ours deleted that folder and we can't seem to figure out how to recover it. I came across extundelete, but it seems the partition needs to be unmounted for it to work, and I don't understand how I would do that on Google Cloud. This project took more than a year, and that was the latest copy after a fire took out the last copy from our local servers.
Could anyone please help or guide me in the right direction?
Getting any files back from your VM's disk may be tricky (at best) or impossible (most probably) if the files got overwritten.
The easiest way would be to get them back from a copy or snapshot of your VM's disk. If you have a snapshot of your disk (taken either manually or automatically) from before the folder in question got deleted, then you will get your files back.
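If such a snapshot exists, a rough way to get at the files (names below are placeholders; the device path depends on your disk layout) is to restore it to a fresh disk and attach that disk read-only to a running VM:
gcloud compute disks create recovered-data --source-snapshot=<snapshot-name> --zone=<zone>
gcloud compute instances attach-disk <vm-name> --disk=recovered-data --mode=ro --zone=<zone>
# then, on the VM, mount it read-only and copy the folder out
sudo mkdir -p /mnt/recovered && sudo mount -o ro /dev/sdb1 /mnt/recovered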
If you don't have any backups then you may try to recover the files - I've found many guides and tutorials, let me just link the ones I believe would help you the most:
Unix/Linux undelete/recover deleted files
Recovering accidentally deleted files
Get list of files deleted by rm -rf
------------- UPDATE -----------
Your last chance in this battle is to make two clones of the disk, then detach the original disk from the VM and attach one of the clones to keep your VM running. Use the second clone for any experiments, and keep the original untouched in case you mess up the second clone.
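A sketch of that clone-and-swap step with gcloud, using placeholder names (stop the VM first if this is its boot disk):
gcloud compute disks create data-clone-1 --source-disk=<original-disk> --zone=<zone>
gcloud compute disks create data-clone-2 --source-disk=<original-disk> --zone=<zone>
gcloud compute instances detach-disk <vm-name> --disk=<original-disk> --zone=<zone>
gcloud compute instances attach-disk <vm-name> --disk=data-clone-1 --zone=<zone>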
Now create a new Windows VM and attach your second clone as an additional disk. At this point you're ready to try various data recovery software:
UFS Explorer
Virtual Machine Data Recovery
There are plenty of others to try, too.
Another approach would be to create an image from the original disk and export it as a VMDK image (saved to a storage bucket). Then download it to your local computer and use, for example, VMware VMDK Recovery or other specialized software for extracting data from virtual machine disk images.
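Roughly, with placeholder names, that export path could look like this (the export can take a while for a large disk):
gcloud compute images create recovery-image --source-disk=<original-disk> --source-disk-zone=<zone>
gcloud compute images export --image=recovery-image --destination-uri=gs://<bucket>/recovery-image.vmdk --export-format=vmdk
gsutil cp gs://<bucket>/recovery-image.vmdk .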

Creating pv backups on AKS managed disks (dynamic) using velero

I'm currently trying out Azure AKS, and during setup I obviously also want to make backups. For this, the best practice seems to be Velero. According to the Velero documentation, to include PV snapshots you annotate the pod/deployment. Example:
backup.velero.io/backup-volumes: wp-pv
Note the above is when using a static managed disk. I can see the snapshot is created. However, when I do a restore, a new PV is created instead of the one from the backup being reused. Is this expected behavior?
Ideally, I would like to use dynamic PVs instead, but that makes things even less trivial because I don't know what name the PV will have and thus can't add the proper annotations beforehand.
How can I solve this in a clean way? My ideal situation would be to have scheduled backups using Velero and, in case of a recovery, automatically have it use the snapshot as the base for the PV instead of creating a new one that doesn't contain my data. For now, it seems this is a manual procedure? Am I missing something?
This is by design.
PersistentVolumes by definition can only ever belong to one PVC claimant, even when provisioned dynamically.
I think what you want is to have the reclaim policy set to retain. See here:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain
A reclaim policy of "Retain" should mean that the PV's data persists; it just needs to be reclaimed by a new PV/PVC. AKS should pick up on this, but I've only ever done this with AWS/bare metal.
In this case Velero, rightly, has to both recreate the PVC and PV for the volume to be released and reassigned to the new claimant.
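If the PV already exists, switching it to Retain can be done in place. A minimal sketch, following the pattern in the Kubernetes docs (the PV name is a placeholder):
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# for dynamically provisioned volumes, set reclaimPolicy: Retain on the StorageClass instead so new PVs get it by default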

How to create Azure Windows images in Azure while keeping master VM?

For my students in my teaching classes, I create short-lived Azure VMs based on an image that I have created using sysprep and captured. That all works great.
But my problem is that each time I sysprep and capture my master VM I lose it, which means I have to recreate the master from scratch each time I want to update the image, and that takes many hours to do.
I have seen many fragile approaches to get around this, all of which seem to involve a lot of manual steps and low-level disk backup/copy/VHD work.
So my question is what is the best approach for me as a teacher to keep my master VM alive so that I don't have to re-create it from scratch each time I need to create a new image for my clones?
I am sure there must be a better way to do this?
For your requirement, I think you need to make a copy of your VM and then create the image from the copied VM, so your original VM stays alive. You can follow the copy steps here, then create the image as before.
You still need to create a new image each time you update your VM, since all the clone VMs are created from the image, so there is no way around recapturing.
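A rough outline of that copy-then-capture flow with the Azure CLI; the resource group, VM, and disk names here are placeholders, and you run sysprep inside the copy rather than the master:
az snapshot create -g teaching-rg -n master-snap --source <master-os-disk-id>
az disk create -g teaching-rg -n master-copy-disk --source master-snap
az vm create -g teaching-rg -n master-copy --attach-os-disk master-copy-disk --os-type windows
# sysprep inside master-copy, then generalize and capture it; the original master VM is untouched
az vm deallocate -g teaching-rg -n master-copy
az vm generalize -g teaching-rg -n master-copy
az image create -g teaching-rg -n class-image-v2 --source master-copy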

Unable to increase disk size on file system

I'm currently trying to log in to one of the instances created on Google Cloud, but I find myself unable to do so. Somehow the machine escaped my attention and the hard disk got completely full. Of course I wanted to free some disk space and make sure the server could restart, but I am facing some issues.
First off, I found the guide on increasing the size of the persistent disk (https://cloud.google.com/compute/docs/disks/add-persistent-disk). I followed that and already set it to 50 GB, which should be fine for now.
However, at the file-system level, because my disk is full I cannot make any SSH connection. The error is simply a timeout, because there is absolutely no space left for the SSH daemon to write to its log. Without any form of connection I cannot free disk space and/or run the "resize2fs" command.
Furthermore, I already tried different approaches.
I seem to not be able to change the boot disk to something else.
I created a snapshot and tried to increase the disk size on the new instance I created from that snapshot, but it has the same problem (filesystem is stuck at 15GB).
I am not allowed to mount the disk as an additional disk in another instance.
Currently I'm pretty much out of ideas. The important data on the disk was backed up, but I'd rather have the settings working as well. Does anyone have any clues as to where to start?
[EDIT]
Currently still trying out new things. I have also tried to run shutdown and startup scripts that remove /opt/* in order to free some temporary space, but the scripts either don't run or produce errors I cannot catch. It's pretty frustrating working nearly blind, I must say.
The next step for me would be to try and get the snapshot locally. It should be doable using the bucket but I will let you know.
[EDIT2]
Getting a snapshot locally is not an option either, or so it seems. Images from Google Cloud instances can only be created or deleted, not downloaded.
I'm now out of ideas.
So I finally found the answer. These steps were taken:
In the GUI I increased the size of the disk to 50 GB.
In the GUI I detached the drive by deleting the machine, whilst ensuring that I did not throw away the original disk.
In the GUI I created a new machine with a sufficiently big hard disk.
On the command line (important!!) I attached the disk to the newly created machine (the GUI option still has a bug ...).
After that I could mount the disk as a secondary disk and perform all the operations I needed.
Keep in mind: by default, Google Cloud images do NOT use logical volume management, so pvresize/lvresize/etc. are not installed and resize2fs might not work out of the box.
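For reference, a sketch of the command-line attach plus the follow-up filesystem grow; the names are placeholders, and the device path and whether you need to grow the partition first depend on your disk layout:
gcloud compute instances attach-disk <rescue-vm> --disk=<old-boot-disk> --zone=<zone>
# on the rescue VM: mount the old root partition, delete whatever is safe to delete, then grow the filesystem
sudo mkdir -p /mnt/olddisk && sudo mount /dev/sdb1 /mnt/olddisk
sudo resize2fs /dev/sdb1   # may require growing the partition first, e.g. with growpart from cloud-utils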

Keep the Cache while Mount a new Snapshot of Azure CloudDrive

The local cache for Azure CloudDrive is great for performance.
I recreate a new snapshot every 20 minutes, with little change or maybe no change at all in each snapshot.
To use the new snapshot, what I do is:
cloudDrive.Unmount();
cloudDrive = storageAccount.CreateCloudDrive(newSnapshotUri);
cloudDrive.Mount(size, option);
I'd like to know: will the old cache still be used for the newly mounted snapshot, or does the whole cache have to be rebuilt?
This is purely an educated guess, but I assume the cache is rebuilt. My reasoning is that you can't know when you mount the new snapshot what data is the same and what's different, so it would be impossible to figure out what to invalidate in the cache.
