GlusterFS: how to balance bricks on a single server?

My env: CentOS 7, GlusterFS 8
At first, I added 2 bricks to create a distributed volume. Later, I added a brick to extend this volume. All operations were on a single server.
Now the usage of the 3 bricks is 81%, 83%, and 55%. I have tried gluster volume rebalance test-volume start; it worked, but the bricks are still not balanced. How can I solve this?

You should run this command:
gluster volume rebalance test-volume fix-layout start
This should work.
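If the bricks are still uneven after that, note that fix-layout only recalculates the directory layout so new files can land on the new brick; migrating existing files needs a data rebalance as well. A minimal sketch, assuming the volume is still named test-volume as in the question:
# recalculate the layout so the new brick receives new files
gluster volume rebalance test-volume fix-layout start
# then migrate existing data onto the new brick
gluster volume rebalance test-volume start
# or, to also move files when the target brick has less free space than the source
gluster volume rebalance test-volume start force
# watch progress until the status shows completed
gluster volume rebalance test-volume status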

Related

Can we add a geo-replicated brick to an existing GlusterFS volume which already has a normal replicated brick?

I have a Gluster volume in which I presently have one replicated brick already running.
Now I want to set up a geo-replicated brick. Do I need to create a new GlusterFS volume and then add a new brick which will be geo-replicated, or can I use the existing GlusterFS volume and add a new brick to it with geo-replication?
Geo-replication works between Gluster volumes.
That means:
source (master) -> gluster volume
destination (slave) -> gluster volume
Volumes contain bricks, but you cannot say "I want data from only one brick"; everything you do is at the volume level only.
When you say volume, it means the volume already has bricks (you cannot have a volume with zero bricks).
Now, to answer your question:
- you can create and use a new volume as the destination for geo-replication.
Usually, you use a clean (empty) volume as your destination (slave) volume.
It is a good idea to try out a few things locally before the actual setup.
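If you want to try it out, here is a rough sketch of a geo-replication session, assuming a master volume master-vol on this cluster, an empty slave volume slave-vol on host slave-host (all placeholder names), and passwordless SSH from a master node to slave-host already set up:
# create the geo-replication session between the two volumes
gluster volume geo-replication master-vol slave-host::slave-vol create push-pem
# start replication and check that the session is healthy
gluster volume geo-replication master-vol slave-host::slave-vol start
gluster volume geo-replication master-vol slave-host::slave-vol status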

GlusterFS - Why is it not recommended to use the root partition?

I am planning to set up a number of nodes to create a distributed-replicated volume using GlusterFS.
I created a gluster replicated volume on two nodes using a directory on the primary (and only) partition.
gluster volume create vol_dist-replica replica 2 transport tcp 10.99.0.3:/glusterfs/dist-replica 10.99.0.4:/glusterfs/dist-replica
This returned the following warning:
volume create: vol_dist-replica: failed: The brick 10.99.0.3:/glusterfs/dist-replica is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
So I used force at the end and the volume was created. I was then able to mount the Gluster volume to a local directory.
My question is, why is it not recommended to use the root partition?
I can only think of the obvious reason that the system may never boot for some reason and therefore you'd lose the brick contents of one node. But surely if you have enough nodes you should be able to recover from that?
Here is an example of why not to do it:
volume remove-brick commit force: failed: Volume X does not exist
No volumes present in cluster
volume create: X: failed: /export/gv01/brick or a prefix of it is already part of a volume
A perfect loop I cannot escape.
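For reference, the usual way out of that loop is to clear the extended attributes Gluster leaves on the old brick directory. A sketch using the path from the error above; only do this if no volume still needs that data, and check the parent directories too, since "a prefix of it" can also carry the attribute:
# remove the markers that flag the directory as a former brick
setfattr -x trusted.glusterfs.volume-id /export/gv01/brick
setfattr -x trusted.gfid /export/gv01/brick
# drop Gluster's internal metadata directory, then restart the daemon
rm -rf /export/gv01/brick/.glusterfs
systemctl restart glusterd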

Gluster remove-brick from volume failed; what should I do to remove a brick?

I had a Gluster volume named data in distributed mode. I added a brick server1:/vdata/bricks/data to the volume data. However, I found that /vdata/bricks/data is on the / disk of Linux, so I want to remove the brick from the volume. I ran gluster volume remove-brick data server1:/vdata/bricks/data start, then checked the status using gluster volume remove-brick data server1:/vdata/bricks/data status, but found that the status is failed and the scanned files count is always 0. What can I do to remove this brick without losing data?
I found the reason myself: it was a DNS resolution failure for some server nodes. After I fixed the DNS, everything worked OK!
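For completeness, a sketch of the normal decommissioning sequence once name resolution works (the subcommand is remove-brick, and data is migrated off before the brick is detached):
# begin migrating data off the brick
gluster volume remove-brick data server1:/vdata/bricks/data start
# poll until the status reports completed
gluster volume remove-brick data server1:/vdata/bricks/data status
# only then detach the brick from the volume
gluster volume remove-brick data server1:/vdata/bricks/data commit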

Is the Gluster volume size I can create limited by a node's available storage?

I have 3 nodes in a Gluster cluster, each with 300G of storage. If I have created only one volume on that cluster, can I write 900G worth of data to the volume? The volume is in replica mode on 2 nodes; is it possible for me to write more data to the volume than each node's available storage?
You can use:
gluster volume quota name-volume enable
gluster volume quota name-volume limit-usage /mnt 10GB
Source: Setting Limits and More Problems
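To confirm the limit took effect and see current usage against it, you can list the configured quotas (same placeholder volume name):
# show configured quota limits and how much of each is used
gluster volume quota name-volume list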

How to change GlusterFS replica 2 to replica 3 with arbiter 1?

GlusterFS 3.7 introduced the arbiter volume, which is a 3-way replication where the third brick is the arbiter.
How does one change from 2-way replication to 3-way replication with arbiter?
I could not find any documentation on changing a running replica 2 volume to an arbiter volume.
Reference:
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
I just sent the patch http://review.gluster.org/#/c/14502/ to add this functionality. If everything goes well, it should make it into the 3.8 release.
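Once that change lands, the conversion is expected to be a single add-brick call that raises the replica count and marks the new brick as the arbiter. A sketch with placeholder volume, host, and brick path:
# add a third brick as the arbiter, turning replica 2 into replica 3 arbiter 1
gluster volume add-brick myvol replica 3 arbiter 1 node3:/bricks/arbiter/myvol
# let self-heal populate the arbiter, then verify the volume layout
gluster volume heal myvol info
gluster volume info myvol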
