How to reset gluster volume top list - glusterfs

I'm new to the Gluster file system.
I want to reset the list shown by
# gluster volume top <VOLNAME> write list-cnt 10
because I fixed some invalid write and read activity, and I need the write and read counts to start over from zero.
Thanks.
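If the goal is just to zero out those counters, one possible approach (assuming a reasonably recent GlusterFS release; check gluster volume help on your version) is the clear keyword of the same top command, which resets the collected statistics. With a placeholder volume name myvol:
# gluster volume top myvol clear
After that, gluster volume top myvol write list-cnt 10 should start counting again from the moment of the clear.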

Related

How can I quickly erase all partition information and data on partitions in Linux?

I'm testing a program to use on Raspberry Pi OS. A good part of what it does is read the partitioning info on the system drive, which in this case will be /boot and /, with no extra partitions, just the two. I'm using a Python script that calls sfdisk. I do what so many examples show: I get the info from the system drive, read it as output, then use it as input to run the command to partition the target drive.
I'm using Python and doing this with subprocess.run(). When my script writes the 2nd partition on the target drive, it writes it at a small size, then I use parted to extend the partition to the end of the drive. In between tests, to wipe my data so I can start fresh, I've been using sfdisk to make one partition spanning the full size of the drive. Also, I'm using USB memory sticks for testing at this point; in general I'll be using those, or SD cards, as the target drives.
The problem I'm finding is that the file structure persists on the partitions of the target drive. (This whole paragraph is about ONLY the target drive.) If I divide it into 2 partitions (as I eventually need to), I find that /boot, the small 1st partition, still has all the files from the previous use of that partition. If I try to wipe the information by making only one big partition on the drive, that single partition still shows only the original files from the /boot partition. If I split it back into 2 partitions, the locations are the same as when I normally write a Raspbian image, and I find the files in both /boot and the system partition are still there.
So repartitioning, with the partitions in the same location, leaves me with the files still intact from the previous incarnation of a partition in the same sectors.
I'd like to, for testing, just wipe out all the information so I start fresh with each test, but I do not want to just use dd and write gigabytes of 0s or 1s to the full drive to wipe out the data.
What can I do to make sure:
The partition table is wiped out between tests
Any directory structure or file information for the partitions is wiped out so there are no files still surviving on any partitions when I start my testing?
A "nice" thing about linux filesystems is that they are separate from partition tables. This has saved me in the past when partition tables have been accidentally deleted or corrupted - recreate the partition table and the filesystem is still there! For your use case, if you want the files to be "gone", you need to destroy the filesystem superblocks. Destroying just the first one is probably sufficient for your use case.
Using dd to overwrite just the first MB of each of your filesystems should get you what you need. So, if you're starting your first partition/FS on block 0, you could do something like
# write 1MB of zeros to wipe out /boot
dd if=/dev/zero of=/dev/path_to_your_device bs=1024 count=1024
That ought to wipe out the /boot file system. From there you'll need to calculate the start of your root volume, and you can write a meg of zeros at the start of your root filesystem by pointing dd at that offset with seek= (note that skip= positions within the input and seek= within the output; see https://superuser.com/questions/380717/how-to-output-file-from-the-specified-offset-but-not-dd-bs-1-skip-n for the offset mechanics).
Alternatively, if /boot is small, you can just write sizeof(/boot)+1MB (assuming the root filesystem starts immediately after /boot) and it'll overwrite the root filesystem's primary superblock too, while saving you some calculations.
Note that the alternate superblocks will still exist, so at some point if you (or someone) wanted to get back what was there previously then recovery of alternate superblocks might be possible, except that whatever files were present in that first 1MB of disk would be corrupt due to overwrite.
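As a concrete sketch of the seek= approach (the device name /dev/sdX and the 257 MiB root offset are made-up example values, not taken from the question; substitute the real start of your root partition as reported by sfdisk or fdisk):
# wipe the partition table plus the first MiB of the disk (covers /boot if it starts at block 0)
dd if=/dev/zero of=/dev/sdX bs=1M count=1
# wipe the first MiB of the root filesystem, assumed here to start 257 MiB into the disk
dd if=/dev/zero of=/dev/sdX bs=1M seek=257 count=1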

Increase the root volume (hard disk) of a running EC2 Linux instance without a restart - Step by step process

Problem:
I have an EC2 instance running Linux (Ubuntu) with a root volume of 10 GB. I have consumed about 96% of that space and my application is now responding slowly, so I want to increase the size to 50 GB.
The most important point is that I already have data there and many applications are running on this EC2 instance, and I don't want to disturb or stop them.
To check the current space available: ~$ df -hT
To check the partition sizes, use: ~$ lsblk
Here is the solution:
Take a snapshot of your volume which contains valuable data.
Increase the EBS volume using Elastic Volumes.
After increasing the size, extend the volume's file system manually.
Details
1. Snapshot Process (AWS Reference)
1) Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2) Choose Snapshots under Elastic Block Store in the navigation pane.
3) Choose Create Snapshot.
4) For Select resource type, choose Volume.
5) For Volume, select the volume.
6) (Optional) Enter a description for the snapshot.
7) (Optional) Choose Add Tag to add tags to your snapshot. For each tag, provide a tag key and a tag value.
8) Choose Create Snapshot.
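For reference, the same snapshot can also be taken from the command line with the AWS CLI (the volume ID below is a placeholder; this assumes the CLI is installed and configured with credentials that can reach the volume):
~$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup before resizing root volume"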
2. Increase the EBS volume using Elastic Volumes (AWS Reference)
1) Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2) Choose Volumes, select the volume to modify, and then choose Actions, Modify Volume.
3) The Modify Volume window displays the volume ID and the volume's current configuration, including type, size, IOPS, and throughput. Set new configuration values as follows:
To modify the type, choose a value for Volume Type.
To modify the size, enter a new value for Size.
To modify the IOPS, if the volume type is gp3, io1, or io2, enter a new value for IOPS.
To modify the throughput, if the volume type is gp3, enter a new value for Throughput.
4) After you have finished changing the volume settings, choose Modify. When prompted for confirmation, choose Yes.
Modifying volume size has no practical effect until you also extend the volume's file system to make use of the new storage capacity.
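Alternatively, the resize itself can be requested from the AWS CLI (again, the volume ID is a placeholder):
~$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 50
~$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
The second command is optional and just lets you watch the modification state until it reaches optimizing or completed.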
3. Extend the volume's file system manually (AWS Reference)
To check whether the volume has a partition that must be extended, use the lsblk command to display information about the block devices attached to your instance.
The root volume, /dev/nvme0n1, has a partition, /dev/nvme0n1p1. While the size of the root volume reflects the new size, 50 GB, the size of the partition reflects the original size, 10 GB, and must be extended before you can extend the file system.
The volume /dev/nvme1n1 has no partitions. The size of the volume reflects the new size, 40 GB.
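Purely as an illustration (the device names and sizes below simply mirror the description above, trimmed to the relevant lsblk columns; this is not output captured from a real instance), the listing looks roughly like:
NAME          SIZE TYPE MOUNTPOINT
nvme0n1        50G disk
└─nvme0n1p1    10G part /
nvme1n1        40G disk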
For volumes that have a partition, such as the root volume shown in the previous step, use the growpart command to extend the partition. Notice that there is a space between the device name and the partition number.
~$ sudo growpart /dev/nvme0n1 1
To extend the file system on each volume, use the correct command for your file system. In my case I have an ext4 filesystem, so I will use the resize2fs command.
~$ sudo resize2fs /dev/nvme0n1p1
Use lsblk to check the partition size.
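Putting the whole extension step together for this case (ext4 root on /dev/nvme0n1p1; for an XFS root you would use xfs_growfs instead of resize2fs):
~$ lsblk                                # volume shows 50 GB, partition still 10 GB
~$ sudo growpart /dev/nvme0n1 1         # grow partition 1 to fill the volume
~$ sudo resize2fs /dev/nvme0n1p1        # grow the ext4 filesystem to fill the partition
~$ df -hT                               # confirm the extra space is now available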

Can glusterfs volume be created out of directory instead of partition?

I want to create a volume using GlusterFS. Can a GlusterFS volume be created out of a directory instead of a partition?
Yes, that should work. I pretty much use it that way during development/testing:
gluster volume create testvol replica 3 myhost:/home/ravi/bricks/brick{1..6} force
Unless you want to use features like snapshots, which require thinly provisioned LVM volumes as the bricks' backing partitions.
Might I also add that if you place multiple bricks of different distribute subvolumes on the same folder, things like df and quotas might not always work as intended.
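For completeness, a volume created on plain directories is started and mounted like any other; a minimal sketch with a placeholder mount point:
gluster volume start testvol
mkdir -p /mnt/testvol
mount -t glusterfs myhost:/testvol /mnt/testvol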

Un-glusterize a disk?

How can I get back a bare disk (with the data) that was used in a simple 2-node replicated GlusterFS cluster?
Would removing the .glusterfs directory be sufficient or are the files themselves somehow tied to GlusterFS?
In addition to removing the .glusterfs directory, you would also need to remove the various extended attributes which Gluster sets on each of the files/directories in the brick.
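A rough sketch of what that clean-up can look like (the brick path /data/brick is a placeholder, and the exact attribute names vary by volume type and Gluster version, so list them first and only remove what you actually find):
# list the trusted.* attributes Gluster left on a file (run as root)
getfattr -d -m . -e hex /data/brick/some/file
# strip the per-file gfid attribute from everything in the brick
find /data/brick -exec setfattr -x trusted.gfid {} \;
# the volume-id attribute lives on the brick root itself
setfattr -x trusted.glusterfs.volume-id /data/brick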

How can GlusterFS move thin disks between bricks?

In a thin provisioning environment, if a file grows bigger and bigger, how can GlusterFS move it between bricks?
The original bricks are full, but the newly added bricks are nearly empty. How does the DHT translator deal with that situation?
After adding new brick(s), you can carry out a rebalance operation to move files from one brick to another.
But you won't be able to do this on a per-file basis.
Check the doc below:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/administration_guide/#sect-Rebalancing_Volumes
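A minimal sketch of that flow, assuming a volume named myvol and a new brick at newhost:/data/brick2 (both placeholders):
gluster volume add-brick myvol newhost:/data/brick2
gluster volume rebalance myvol start
gluster volume rebalance myvol status
The rebalance runs in the background; the status command shows when it has completed.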
