I made an image of a disk using Clonezilla. I checked the image and everything was fine. The image is of a 120GB disk. When I try to restore the image to a 1TB disk, or any other disk with a capacity greater than 120GB, I always get the message:
Target partition size (2MB) is smaller than source (105MB).
Use option -C to disable size checking (Dangerous).
I have never come across this situation before.
Any idea how to overcome this problem?
Thank you very much
This happens because Clonezilla does not support dynamic volumes.
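For what it's worth, here is a hedged sketch of how this size check is commonly bypassed when restoring to a larger disk from the Clonezilla live shell. The image name and target disk below are placeholders, and the expert-mode parameters -icds (skip checking the destination disk size) and -k1 (create the partition table proportionally on the target) are general advice, not a fix specific to dynamic volumes:
sudo /usr/sbin/ocs-sr -g auto -e1 auto -e2 -r -j2 -icds -k1 -p choose restoredisk my-120gb-image sda
The same two options can also be ticked in the "Expert" mode of the interactive wizard instead of typing the command by hand.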
I'm running 12 Cassandra nodes on AWS EC2 instances. Four of them are using almost 80% of their disk space, so compaction fails on those nodes. Since these are EC2 instances, I can't add more disk space to the existing data volume on the fly, and I can't ask the IT team to add more nodes to scale out and spread the cluster, as the other nodes are using less than 40% of their disk space. Before fixing the unbalanced-cluster issue, is there any way to free up some disk space?
My question is: how can I find unused SSTables and move them to another partition so that compaction can run and free up some space?
Any other suggestions for freeing up disk space are welcome.
PS: I already dropped all the snapshots and backups.
If you are using vnodes, the difference in data sizes between nodes should not be that large. Before jumping to a solution, we must find the reason for the big difference in data sizes across nodes.
You should look into the logs to see whether some big SSTable got corrupted, causing compaction failures and the growth in data size, or whether anything else in the logs points to the reason the disk usage keeps increasing.
We faced an issue in Cassandra 2.1.16 where, due to a bug, old SSTable files were not removed even after compaction. We read the logs and identified the files that could be removed. This is an example of finding the reason for increased data size by reading the logs.
So you must identify the reason before applying a solution. If it is a dire situation, you can identify keyspaces/tables that are not used during your main traffic, move their SSTables to a backup location, and remove them from the data directory. Once the compaction process is over, you can bring them back (a rough sketch follows after the warning below).
Warning: test any procedure before trying it in production.
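A rough sketch of the "move unused SSTables aside" idea, assuming the default data directory /var/lib/cassandra/data and a systemd-managed service; the keyspace, table, and destination path are placeholders, and the table should not be receiving traffic while you do this:
nodetool flush some_keyspace some_table    # persist memtables to disk first
sudo systemctl stop cassandra              # take the node down before touching files
mv /var/lib/cassandra/data/some_keyspace/some_table-*/ /mnt/spare/sstable_backup/
sudo systemctl start cassandra
Once compaction on the remaining tables has freed space, stop the node again, move the directory back, and start it up (or use nodetool refresh some_keyspace some_table to pick the files up on a running node).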
I want to create a new NC6 instance (for my GPU and ML work). A few days ago I created my first NC6 instance with a 150 GB standard SSD disk. That was too much for me, so I tried to change the disk to a cheaper one, but I noticed you cannot shrink or swap the disk for a smaller or HDD-based one. So I ended up deleting the instance, and now I'm trying to create a new one, but there seems to be no way to change the disk size: you can change the "OS disk type" from SSD to HDD, but not the disk size, e.g. from 150 GB to 32 GB. So how do I specify the disk size for a new Azure VM instance? Thanks.
The size of the OS disk is determined by the image of the VM that you choose.
When you create a VM you get a copy of the base image, and that base image has a size.
You cannot reduce the size, but you can increase it:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/expand-os-disk
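For illustration, a minimal Azure CLI sketch of growing (never shrinking) a managed OS disk; the resource group, VM, and disk names are placeholders:
az vm deallocate --resource-group myResourceGroup --name myVM
az disk update --resource-group myResourceGroup --name myVM_OsDisk --size-gb 256
az vm start --resource-group myResourceGroup --name myVM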
Bouncing off of Shiraz's answer. I was banging my head on this problem. I have a free account with Azure and I'm trying to keep it free by not using options that incur charges. But the default Server 2019 image comes with a 128GB disk and no option to use a smaller size. The P6 disk that comes with the free account is 64GB.
I found that if you go into all images, you can search for a [smalldisk] image. It comes in at 32GB. In my case, I created the VM, shut it down, then expanded the disk up to P6. (Why wouldn't I use the full size that comes for free?)
Screenshots for Reference (Imgur)
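For anyone scripting this, here is a sketch of the same approach with the Azure CLI; the resource group, VM name, and VM size are placeholders, and the image URN assumes the Windows Server 2019 Datacenter "smalldisk" (32 GB) SKU:
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image MicrosoftWindowsServer:WindowsServer:2019-Datacenter-smalldisk:latest \
  --size Standard_B1s \
  --admin-username azureuser
You can then resize the OS disk up to 64 GB (P6) afterwards, as described above.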
I'm hitting this capacity issue and I'm not able to add a volume. When I try to add a volume, it throws an error telling me to select a size of 500 GB or above; when I do that, it throws another error saying the specified size is larger than the space available, while my available space shows zero bytes.
Images added for reference.
Thanks in Advance.
Can you please open a support ticket for your issue if you have not done so already?
https://learn.microsoft.com/en-us/azure/storsimple/storsimple-virtual-array-log-support-ticket
A StorSimple virtual array needs to be assigned a data disk to store its local cache, and it appears that something may be wrong with your local data disk configuration.
I am attempting to partition my Dell R710 for VM storage. Details:
Newly installed XenServer 7.2. Accepted Defaults.
5x 2TB drives, RAID 5, single virtual disk. Total storage: 8TB
All I want to do is add two partitions, a 4TB for VM storage, then whatever is left for media storage (~ 3.9TB).
When I run parted to try to create the first partition (4TB), I receive the error "Unable to satisfy all constraints on the partition." I have Googled and Googled, but am unable to find anything that points me in the right direction. Additionally, I get a strange message (see the bottom of the screenshot) suggesting I may have an issue with my sectors (34...2047 available?).
Below is a screenshot that contains pertinent information as well as command output. Here's hoping someone can help. Thanks in advance!
You are attempting to write a partition into space that is already partitioned. You will have to delete the LVM partition first.
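A hedged sketch of that (not your exact layout): the partition number and start offset below are placeholders, so check the existing table and free space first, and make sure XenServer's local storage repository / volume group is no longer using that LVM partition:
parted /dev/sda unit s print free        # find the leftover LVM partition and the free region
parted /dev/sda rm 3                     # remove the LVM partition (3 is a placeholder)
parted -a optimal /dev/sda mkpart vmstore 100GB 4.1TB    # roughly 4TB for VM storage
parted -a optimal /dev/sda mkpart media 4.1TB 100%       # the remainder for media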
So, I ended up booting from a Debian live CD and using GParted to tweak the partition size. This worked like a charm. Marking meatball's answer as correct, as this was what led me down this path.
On a GlusterFS 3.4.3 server,
when I create a volume with
gluster volume create NEW-VOLNAME stripe COUNT NEW-BRICK-LIST...
and store some files in it, the volume consumes 1.5 times the space of the data actually stored, regardless of the number of stripes.
e.g. if I create a 1GB file in the volume with
dd if=/dev/urandom of=random1gb bs=1M count=1000
It consumes 1.5GB of the bricks' total disk space. "ls -alhs", "du -hs", and "df -h" all indicate the same thing: 1.5GB of space used for a 1GB file. Inspecting each brick and summing up the usage also shows the same result.
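For reference, this is roughly how the per-brick numbers were added up (the brick paths are placeholders; run this on each brick server and sum the results):
du -sh /export/brick1/myvol /export/brick2/myvol    # space this volume uses on the local bricks
df -h /export/brick1 /export/brick2                 # overall filesystem usage of each brick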
Interestingly, this doesn't happen with the newer version, GlusterFS 3.5 server: a 1GB file uses 1GB of total brick space, as expected.
It's good that it is fixed in 3.5, but I cannot use 3.5 right now due to another issue.
I couldn't find any document or article about this. Do I have a wrong option set (I left everything at the defaults)? Or is it a bug in 3.4? It seems too serious a problem to be just a bug.
If it is by design, why? To me it looks like a huge waste of storage for a storage system.
To be fair, I'd like to point out that GlusterFS works very well apart from this issue: excellent performance (especially with qemu-libgfapi integration), easy setup, and flexibility.