I am planning to set up a number of nodes to create a distributed-replicated volume using GlusterFS.
I created a gluster replicated volume on two nodes using a directory on the primary (and only) partition.
gluster volume create vol_dist-replica replica 2 transport tcp 10.99.0.3:/glusterfs/dist-replica 10.99.0.4:/glusterfs/dist-replica
This returned the following warning:
volume create: vol_dist-replica: failed: The brick 10.99.0.3:/glusterfs/dist-replica is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
So I used force on the end and the volume was created. I was then able to mount the gluster volume to a local directory.
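The commands ended up looking roughly like this (the mount point /mnt/gluster is just an example path):

gluster volume create vol_dist-replica replica 2 transport tcp 10.99.0.3:/glusterfs/dist-replica 10.99.0.4:/glusterfs/dist-replica force
mount -t glusterfs 10.99.0.3:/vol_dist-replica /mnt/gluster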
My question is, why is it not recommended to use the root partition?
I can only think of the obvious reason that the system may fail to boot for some reason, and therefore you'd lose the brick contents of that node. But surely if you have enough nodes you should be able to recover from that?
Here is an example of why not to do it:
volume remove-brick commit force: failed: Volume X does not exist
No volumes present in cluster
volume create: X: failed: /export/gv01/brick or a prefix of it is already part of a volume
A perfect loop I cannot escape.
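The usual way out of that loop (a sketch, assuming the brick path from the error above) is to clear the extended attributes Gluster left on the directory when it was still a brick:

setfattr -x trusted.glusterfs.volume-id /export/gv01/brick   # drop the volume-id xattr
setfattr -x trusted.gfid /export/gv01/brick                  # drop the gfid xattr
rm -rf /export/gv01/brick/.glusterfs                         # remove leftover gluster metadata

After that, gluster volume create should accept the path again.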
I kicked off a Spark job on Kubernetes with quite a big volume of data, and the job failed because there was not enough space in the /var/data/spark-xxx directory.
As the Spark documentation says at https://github.com/apache/spark/blob/master/docs/running-on-kubernetes.md:
Spark uses temporary scratch space to spill data to disk during shuffles and other operations. When using Kubernetes as the resource manager the pods will be created with an emptyDir volume mounted for each directory listed in SPARK_LOCAL_DIRS. If no directories are explicitly specified then a default directory is created and configured appropriately.
It seems the /var/data/spark-xx directory is the default one for emptyDir. Thus, I tried to map that emptyDir to a volume (with more space) which is already mounted on the driver and executor pods.
I mapped it in the properties file and I can see that it is mounted in the shell:
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkvolume.mount.path=/checkpoint
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkvolume.mount.readOnly=false
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkvolume.options.claimName=sparkstorage
spark.kubernetes.executor.volumes.persistentVolumeClaim.checkvolume.mount.path=/checkpoint
spark.kubernetes.executor.volumes.persistentVolumeClaim.checkvolume.mount.readOnly=false
spark.kubernetes.executor.volumes.persistentVolumeClaim.checkvolume.options.claimName=sparkstorage
I am wondering whether it is possible to somehow mount emptyDir on my persistent storage, so I can spill more data and avoid job failures?
I found that Spark 3.0 has addressed this problem and completed the feature.
Spark supports using volumes to spill data during shuffles and other operations. To use a volume as local storage, the volume's name should start with spark-local-dir-, for example:
--conf spark.kubernetes.driver.volumes.[VolumeType].spark-local-dir-[VolumeName].mount.path=<mount path>
--conf spark.kubernetes.driver.volumes.[VolumeType].spark-local-dir-[VolumeName].mount.readOnly=false
Reference:
https://issues.apache.org/jira/browse/SPARK-28042
https://github.com/apache/spark/pull/24879
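Putting that together, a sketch of routing shuffle spill to a PersistentVolumeClaim instead of the default emptyDir (the volume name spark-local-dir-1, the claim name sparkscratch and the mount path /scratch are placeholders):

--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.path=/scratch
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.readOnly=false
--conf spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.claimName=sparkscratch

Because the volume name starts with spark-local-dir-, Spark uses the mount as local scratch space; the same options can also be set for the driver via spark.kubernetes.driver.volumes.* if it needs the extra room.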
I have a gluster volume that presently has one replicated brick already running.
Now I want to set up a geo-replicated brick. Do I need to create a new glusterfs volume and then add a new brick which will be geo-replicated, or can I use the existing glusterfs volume and add a new brick to it with geo-replication?
Geo-replication works between gluster volumes.
That means:
source (master) -> gluster volume
destination (slave) -> gluster volume
Volumes contain bricks, but you cannot ask for the data from only one brick: everything you do is at the volume level.
When you say volume, it means the volume already has bricks (you cannot have a volume with zero bricks).
Now, to answer your question:
- You can create and use a new volume as the destination for geo-replication.
Usually, you use a clean (empty) volume as your destination (slave) volume.
It is a good idea to try out a few things locally before the actual setup.
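For reference, a minimal sketch of wiring two existing volumes together (mastervol, slavehost and slavevol are placeholder names; passwordless SSH to the slave is assumed):

gluster volume geo-replication mastervol slavehost::slavevol create push-pem   # create the session
gluster volume geo-replication mastervol slavehost::slavevol start             # start syncing
gluster volume geo-replication mastervol slavehost::slavevol status            # check progress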
I had a gluster volume named data in distributed mode. I added a brick server1:/vdata/bricks/data to the volume data, but then found that /vdata/bricks/data is on the / disk of the Linux system. I want to remove the brick from the volume, so I ran gluster volume remove-brick data server1:/vdata/bricks/data start. Then I checked the status using gluster volume remove-brick data server1:/vdata/bricks/data status, but found the status is failed and the scanned files count is always 0. What can I do to remove this brick without losing data?
I found the reason myself: it was a DNS resolution failure for some server nodes. After I fixed the DNS, everything worked OK!
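For anyone else hitting this, the normal remove-brick sequence once the rebalance can progress looks like this (using the volume and brick names from the question):

gluster volume remove-brick data server1:/vdata/bricks/data start    # start migrating data off the brick
gluster volume remove-brick data server1:/vdata/bricks/data status   # wait until the status shows completed
gluster volume remove-brick data server1:/vdata/bricks/data commit   # detach the brick once migration is done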
I use GlusterFS in a high-availability cluster. I need some functionality for getting the replication status (replication completeness). In other words, I need to know that the cluster is currently in a protected state (in terms of disk replication) and that, in the case of a master node failover, no data will be lost.
I already tried gluster volume status and gluster peer status, but they only provide information about connections.
P.S.
For instance, in DRBD there is the command drbdadm status, which reports peer-disk:UpToDate (meaning that the replication process has completed).
Is there any built-in GlusterFS function that can provide me with the required information?
gluster volume heal <VOLNAME> info is the command for checking pending heals in replicate volumes. See https://gluster.readthedocs.io/en/latest/Troubleshooting/heal-info-and-split-brain-resolution/ for more info.
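As a sketch of what to look at (myvol is a placeholder volume name):

gluster volume heal myvol info

Each brick is listed with a "Number of entries" count; when every brick reports 0 pending entries, replication has caught up and a failover at that moment should not lose data.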
I am new to Cassandra. In Cassandra, in order to store data we specify a local directory on the machine where Cassandra is installed, using the data_file_directories property in the cassandra.yaml configuration file. My need is to define data_file_directories as a network directory (something like 192.x.x.x/data/files/). I am using only a single-node cluster for rapid data writes (for logging activities). As I don't rely on replication, my replication factor is 1. Can anyone help with defining a network directory for the Cassandra data directory?
Thanks in advance.
1) I have stored Cassandra data on an Amazon EBS volume (a network volume), but in the EC2 case it is simple, as we can mount EBS volumes on a machine as if they were local.
2) In other cases you will have to use NFS to configure the network directory. I have never done this, but it looks straightforward.
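A rough sketch of the NFS route (the server address reuses the placeholder from the question, and the mount point /mnt/cassandra-data is an assumption):

sudo mount -t nfs 192.x.x.x:/data/files /mnt/cassandra-data   # mount the NFS export locally

Then point cassandra.yaml at the mount:

data_file_directories:
    - /mnt/cassandra-data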
Cassandra is firmly designed around using local storage instead of EBS or other network-mounted data. This gives you better performance, better reliability, and better cost-effectiveness.