I am attempting to partition my Dell R710 for VM storage. Details:
Newly installed XenServer 7.2, accepting the defaults.
5x 2TB drives in RAID 5, presented as a single virtual disk. Total storage: 8TB.
All I want to do is add two partitions: 4TB for VM storage, then whatever is left for media storage (~3.9TB).
When I run parted to create the first 4TB partition, I get the error "Unable to satisfy all constraints on the partition." I have Googled and Googled but can't find anything that points me in the right direction. I also get a strange message (see the bottom of the screenshot) suggesting there may be an issue with my sectors (34...2047 available?).
Below is a screenshot that contains pertinent information as well as command output. Here's hoping someone can help. Thanks in advance!
You are attempting to write a partition to space that is already partitioned. You will have to delete the LVM partition first.
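If you want to do this from the XenServer command line rather than a live CD, a rough sketch of the cleanup follows. The device /dev/sda, partition number 3, and the volume group name are all assumptions based on a default install; check the output of pvs and parted print on your own box first, and be aware that this destroys the local SR and anything stored on it.

pvs                                  # confirm which partition holds the LVM physical volume
vgremove VG_XenStorage-example       # remove the local storage volume group (name here is a placeholder)
pvremove /dev/sda3                   # wipe the LVM metadata from that partition
parted /dev/sda rm 3                 # delete the now-unused partition
parted /dev/sda unit s print free    # note where the free space starts
parted -a optimal /dev/sda mkpart vm-storage ext4 START 4TB   # START = start of the free space shown above
parted -a optimal /dev/sda mkpart media ext4 4TB 100%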
So, I ended up booting from a Debian live CD and using gparted to tweak the partition size. This worked like a charm. Marking meatball's answer as correct, as this was what led me down this path.
I made an image of a disk using Clonezilla. I checked the image and everything was fine. The image is of a 120GB disk. When I try to restore the image to a 1TB disk, or any other disk with a capacity greater than 120GB, I always get the message:
Target partition size (2MB) is smaller than source (105MB).
Use option -C to disable size checking (Dangerous).
I have never come across this situation before.
Any idea how to overcome this problem?
Thank you very much
This happened because Clonezilla does not support dynamic volumes.
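If you want to confirm what Clonezilla actually wrote to the destination before it bails out, you can compare the partition layouts from a terminal. A quick sketch, assuming the target disk is /dev/sdb (adjust to your device):

lsblk -o NAME,SIZE,TYPE /dev/sdb    # partition sizes created on the target so far
sfdisk -l /dev/sdb                  # full partition table, to compare against the source disk's layout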
I'm hitting a capacity issue and can't add a volume. When I try to add one, it throws an error telling me to select a size of 500 GB or above; when I do that, it throws another error saying the requested size is larger than the space available, and my available space shows zero bytes.
Images added for reference.
Thanks in Advance.
Can you please open a support ticket for your issue if you have not done so already?
https://learn.microsoft.com/en-us/azure/storsimple/storsimple-virtual-array-log-support-ticket
The StorSimple Virtual Array needs to be assigned a data disk to store its local cache, and it appears that something may be wrong with your local data disk configuration.
I'm running a two-node DataStax AMI cluster on AWS. Yesterday, Cassandra started refusing connections from everything. The system logs showed nothing. After a lot of tinkering, I discovered that the commit logs had filled up all the disk space on the allotted mount, and this seemed to be causing the connection refusals (I deleted some of the commit logs, restarted, and was able to connect).
I'm on DataStax AMI 2.5.1 and Cassandra 2.1.7.
If I decide to wipe and restart everything from scratch, how do I ensure that this does not happen again?
You could try lowering the commitlog_total_space_in_mb setting in your cassandra.yaml. The default is 8192MB for 64-bit systems (it should be commented-out in your .yaml file... you'll have to un-comment it when setting it). It's usually a good idea to plan for that when sizing your disk(s).
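For illustration, the relevant lines in cassandra.yaml would end up looking something like this (4096 here is just an example value; size it to leave headroom on your disk):

# default is 8192 for 64-bit JVMs; un-commented and lowered
commitlog_total_space_in_mb: 4096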
You can verify this by running a du on your commitlog directory:
$ du -d 1 -h ./commitlog
8.1G ./commitlog
A smaller commit log space will cause more frequent flushes (increased disk I/O), though, so you'll want to keep an eye on that.
Edit 20190318
Just had a related thought (on my 4-year-old answer). I saw that it received some attention recently, and wanted to make sure that the right information is out there.
It's important to note that sometimes the commit log can grow in an "out of control" fashion. Essentially, this can happen because the write load on the node exceeds Cassandra's ability to keep up with flushing the memtables (and thus, removing old commitlog files). If you find a node with dozens of commitlog files, and the number seems to keep growing, this might be your issue.
Essentially, your memtable_cleanup_threshold may be too low. Although this property is deprecated, you can still control how it is calculated by lowering the number of memtable_flush_writers.
memtable_cleanup_threshold = 1 / (memtable_flush_writers + 1)
The documentation has been updated as of 3.x, but used to say this:
# memtable_flush_writers defaults to the smaller of (number of disks,
# number of cores), with a minimum of 2 and a maximum of 8.
#
# If your data directories are backed by SSD, you should increase this
# to the number of cores.
#memtable_flush_writers: 8
...which (I feel) led to many folks setting this value WAY too high.
Assuming a value of 8, the memtable_cleanup_threshold is .111. When the footprint of all memtables exceeds this ratio of total memory available, flushing occurs. Too many flush (blocking) writers can prevent this from happening expediently. With a single /data dir, I recommend setting this value to 2.
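Plugging both values into the formula above makes the difference clear:

memtable_cleanup_threshold = 1 / (8 + 1) ≈ 0.11   # 8 flush writers
memtable_cleanup_threshold = 1 / (2 + 1) ≈ 0.33   # 2 flush writers (single /data dir)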
In addition to decreasing the commit log size as suggested by BryceAtNetwork23, a proper solution to ensure this doesn't happen again should include monitoring of the disks, so that you are alerted when they are getting full and have time to act or increase the disk size.
Seeing as you are using DataStax, you could set an alert for this in OpsCenter. I haven't used this in the cloud myself, but I imagine it would work. Alerts can be set by clicking Alerts in the top banner -> Manage Alerts -> Add Alert, then configuring the mounts to watch and the thresholds to trigger on.
Or, I'm sure there are better tools to monitor disk space out there.
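Even something as crude as a cron'd shell script can buy you time. A minimal sketch, where the mount point, the 80% threshold, and the email address are all placeholders for whatever fits your setup (and it assumes a working mail command on the node):

#!/bin/sh
# warn when the commit log mount is getting full
USAGE=$(df --output=pcent /var/lib/cassandra/commitlog | tail -1 | tr -dc '0-9')
if [ "$USAGE" -ge 80 ]; then
    echo "commit log mount is ${USAGE} percent full" | mail -s "Cassandra disk alert" you@example.com
fi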
On a GlusterFS 3.4.3 server,
When I create a volume with
gluster volume create NEW-VOLNAME stripe COUNT NEW-BRICK-LIST...
and store some files, the volume consumes 1.5 times the space of the actual data stored, regardless of the number of stripes.
e.g. if I create a 1GB file in the volume with
dd if=/dev/urandom of=random1gb bs=1M count=1000
It consumes 1.5GB of the bricks' total disk space. "ls -alhs", "du -hs", and "df -h" all indicate the same fact: 1.5GB of space used for a 1GB file. Inspecting each brick and summing up the usage also shows the same result.
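For reference, the checks look roughly like this (the brick paths and client mount point are just examples from my setup):

ls -alhs /mnt/glusterfs/random1gb    # size as seen on the client mount
du -hs /export/brick*                # per-brick usage on each server, summed by hand
df -h /export/brick*                 # brick filesystem usage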
Interestingly, this doesn't happen with the newer GlusterFS 3.5 server: a 1GB file uses 1GB of total brick space, as expected.
It's good that it is fixed in 3.5, but I cannot use 3.5 right now due to another issue.
I couldn't find any document or article about this. Did I use a wrong option (I left everything at the defaults)? Or is it a bug in 3.4? It seems too serious a problem to be just a bug.
If it is by design, why?? To me it looks like a huge waste of storage for a storage system.
To be fair, I'd like to point out that GlusterFS works very well except for this issue. Excellent performance (especially with qemu-libgfapi integration), easy setup, and flexibility.
In my C* 1.2.4 setup, I have a 200GB SSD for the data and a 500GB rotational drive for the commit logs.
During a scrub operation, I had the unpleasant surprise of filling up my SSD with snapshots. That made the Cassandra box unresponsive, but it kept showing a status of up in nodetool status.
I am wondering if there is a way to specify the target directory for snapshots when doing a scrub.
Otherwise, do you have any ideas for workarounds?
I can do one column family at a time and then copy the snapshots folder, but I am open to smarter solutions.
Thanks,
H
Snapshots in Cassandra are created as hard links to your existing data files. This means that at the time the snapshot is taken, it takes up almost no extra space. However, it causes the old files to remain, so if you delete or update data, the old version is still there.
This means snapshots must be taken on the drive that stores the data. If you don't need the snapshot any more, just delete it with 'nodetool clearsnapshot' (see the nodetool help output for how to decide which snapshots to delete). If you want to keep the snapshot, you can move it elsewhere. It will only start using much disk space after a while, so you could keep it until you are happy that the scrub didn't delete important data, then delete the snapshot.
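A rough sketch of moving a snapshot off the SSD and then releasing the space, assuming the default data directory layout and made-up keyspace/column family/snapshot names (adjust all paths and names to your own cluster):

# see how much space the snapshots are actually pinning
du -sh /var/lib/cassandra/data/*/*/snapshots/*
# copy a snapshot you want to keep onto the larger rotational drive
cp -a /var/lib/cassandra/data/mykeyspace/mycf/snapshots/my-scrub-snapshot /path/on/big/drive/
# then drop the on-disk snapshot so the SSD space is freed
nodetool clearsnapshot mykeyspace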