We have SAN storage, and we are considering running GlusterFS alongside it. Is that possible?
Can I use GlusterFS on top of SAN storage?
What is GlusterFS?
How is it different from NFS?
What are the use cases that would push me toward GlusterFS?
What is the best way to migrate SMB, NFS, and DFS FROM Isilon TO NetApp CVO (Cloud Volumes ONTAP) in Azure?
I want to be able to migrate each of these individually, so they will not be going over together as a mixture (though that may change). So far I am looking at:
SMB – EMCopy/Robocopy
NFS - rsync
DFS - I do not know yet. I need suggestions.
I would like options for preserving and bringing over the file permissions, ACLs, etc., and anything else in that area along with the files.
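For the NFS/rsync leg, rsync can carry POSIX permissions and ACLs across if you ask it to. This is only a minimal sketch, assuming the Isilon export is mounted at /mnt/isilon and the CVO export at /mnt/cvo (both paths are hypothetical):

# Dry run first (-n) to see what would transfer without changing anything
rsync -aAXHn /mnt/isilon/share1/ /mnt/cvo/share1/

# Archive mode (-a) keeps owners, groups, modes and timestamps;
# -A copies POSIX ACLs, -X copies extended attributes, -H preserves hard links
rsync -aAXH --numeric-ids --progress /mnt/isilon/share1/ /mnt/cvo/share1/

For the SMB side, Robocopy's /SEC or /COPYALL switches bring the NTFS ACLs along in much the same way; check the EMCopy/Robocopy documentation for the exact behaviour of your versions.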
I know this is a bit weird, but I'm building an application that makes small local changes to ephemeral file/folder systems and needs to sync them with a store of record. I am using NFS right now, but it is slow, not super scalable, and expensive. Instead, I'd love to take advantage of btrfs or zfs snapshotting for efficient syncing of snapshots of a small local filesystem, and push the snapshots into cloud storage.
I am running this application in Kubernetes (in GKE), which uses GCP VMs with ext4 formatted root partitions. This means that when I mount an emptyDir volume into my pods, the folder is on an ext4 filesystem I believe.
Is there an easy way to get an ephemeral volume mounted with a different filesystem that supports these fancy snapshotting operations?
No, and GKE does not offer that kind of low-level control anyway, but the rest of this answer presumes you have managed to create a local mount of some kind. The easiest answer is a hostPath mount; however, that requires you to manually account for multiple similar pods on the same host so they don't collide. A newer option is an ephemeral CSI volume combined with a CSI plugin that basically reimplements emptyDir. https://github.com/kubernetes-csi/csi-driver-host-path gets most of the way there, but it 1) would require more work for this use case and 2) is explicitly not supported for production use. Failing either of those, you can move the whole kubelet data directory onto another mount, though that might not accomplish what you are looking for.
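To make the hostPath route concrete, here is a minimal sketch of a pod that mounts a directory from the node, assuming you have already prepared a btrfs (or zfs) filesystem on the node at /mnt/btrfs-scratch; the pod name, image, and paths are all invented, and you still have to give each pod its own subdirectory so similar pods on one node do not collide:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: snapshot-worker-0            # unique per pod; reused for the host subdirectory
spec:
  containers:
  - name: worker
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch
      mountPath: /data               # the pod sees the node's btrfs directory here
  volumes:
  - name: scratch
    hostPath:
      path: /mnt/btrfs-scratch/snapshot-worker-0   # one subdirectory per pod to avoid collisions
      type: DirectoryOrCreate
EOF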
How do you create shared space across nodes?
I have a designated drive that I would like to use, but I want to maintain the ability to add additional drives later.
Let's assume you are just starting out and do not have any specific performance requirements. Then probably the easiest way to go would be to start an NFS server on the head node and export your dedicated drive as an NFS file share to the nodes. Your nodes would be able to mount this share over the network under the same mountpoint.
If your dedicated drives are spread across the cluster, the problem obviously gets trickier. After you have become comfortable with NFS, have a look at parallel file systems such as GlusterFS.
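If you do go down that road later, the basic GlusterFS flow looks roughly like this; the hostnames and brick paths below are invented, and this sketch assumes glusterd is already installed and running on each node (bricks should also live on their own filesystem rather than the root partition):

# On node1: add the other nodes to the trusted pool
gluster peer probe node2
gluster peer probe node3

# Create a 3-way replicated volume from one brick per node, then start it
gluster volume create shared replica 3 node1:/bricks/shared node2:/bricks/shared node3:/bricks/shared
gluster volume start shared

# On any node or client: mount the volume through the FUSE client
mkdir -p /mnt/shared
mount -t glusterfs node1:/shared /mnt/shared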
I have two VMs running on CentOS 7. One is the active server and the other is the passive server.
I created a 200 GB LUN on the SAN as a common share path for both VMs, so that if I upload files on one server, the same files can be seen on the other. It also helps in the failover case of a single VM.
Can someone please share how to set up this method?
You might want to use an NFS server so that you can freely share directories; plus, when one of your NFS clients goes down, you can easily start another and share your files again.
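A minimal sketch of that on CentOS 7, assuming the LUN is mounted at /data on the active server (192.168.1.10) and the passive server is 192.168.1.20; all names and addresses are made up:

# On the server that owns the LUN: install and start the NFS server
yum install -y nfs-utils
systemctl enable nfs-server
systemctl start nfs-server

# Export the directory sitting on the LUN (adjust the options to your needs)
echo '/data 192.168.1.20(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# If firewalld is running, also allow the NFS service:
# firewall-cmd --permanent --add-service=nfs && firewall-cmd --reload

# On the other VM: mount the share
yum install -y nfs-utils
mkdir -p /data
mount -t nfs 192.168.1.10:/data /data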
I have set up a small GlusterFS cluster with 3+1 nodes.
They're all on the same LAN.
There are 3 servers and 1 laptop (connected via Wi-Fi) that is also a GlusterFS node.
The laptop often disconnects from the network. ;)
The use case I want to achieve is this:
I want my laptop to automatically synchronize with the GlusterFS filesystem when it reconnects. (That's easy and done.)
But when the laptop is disconnected from the cluster, I still want to access the filesystem "offline": modify, add, and remove files.
Obviously, the only way I can access the GlusterFS filesystem while it is offline from the cluster is to access the volume storage directly, i.e. the directory I configured when creating the gluster volume. I guess that's the brick.
Is it safe to modify files inside that storage?
Will they be replicated to the cluster when the node re-connects?
There are multiple questions in your list:
First: can I access GlusterFS when my system is not connected to the cluster?
If you set up a GlusterFS daemon and brick on your system, mount this local volume through gluster the way you usually would, and also add a replication target, then you can access your brick through gluster as if it were not on your local system. The data will then be synchronized with the replication target once you reconnect your system to the network.
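In practice that means working through the mounted volume rather than the brick directory itself. A small sketch, with a made-up volume name "lapvol":

# On the laptop: mount its own gluster volume through the local daemon,
# so every change goes through gluster and is tracked for replication
mkdir -p /mnt/lapvol
mount -t glusterfs localhost:/lapvol /mnt/lapvol

# Work inside /mnt/lapvol, never inside the brick directory;
# gluster will sync the changes to the replica bricks once the network is back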
Second: can I edit files in my brick directly?
Technically you can: you can just navigate to your brick and edit a file. However, since gluster will not know what you changed, the changes will not be replicated and you will create a split-brain situation. So it is certainly not advisable (don't do that unless you also want to make the same change manually in your replication brick).
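If you ever suspect the bricks have drifted apart, gluster's self-heal commands will tell you; the volume name here is hypothetical:

# List files that still need healing on volume "lapvol"
gluster volume heal lapvol info

# List files that are in an actual split-brain state
gluster volume heal lapvol info split-brain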
Tomasz, it is definitely not a good idea to directly manipulate the backend volume. Say you add a new file to the backend volume: glusterfs is not aware of this change, and the file appears as a spurious file when the parent directory is accessed via the glusterfs volume. I am not sure glusterfs is ideal for your use case.