Rsync keeps saying 'file has vanished:' when copying files from a Gluster 3.4 server to local

I'm trying to copy files from /mnt/temp-vol, where a Gluster 3.4 volume is mounted; I have two Gluster 3.4 servers. What could be causing these 'file has vanished' and 'ls: cannot read symbolic link' errors? Is this a Gluster 3.4 bug, or something about running rsync and ls inside the Gluster-mounted area /mnt/temp-vol?
Any help would be much appreciated.
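For reference, rsync reports this situation with a dedicated exit code, which makes it easy to tell the 'vanished source files' case apart from a hard failure; a minimal sketch, where the destination path /local/backup/ is a placeholder:
rsync -av /mnt/temp-vol/ /local/backup/
# Exit code 24 means "partial transfer due to vanished source files":
# files disappeared between rsync's file-list scan and the copy itself,
# which on a GlusterFS mount often points at self-heal or rebalance
# activity changing the directory underneath rsync.
if [ $? -eq 24 ]; then
    echo "some source files vanished during the copy"
fi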

Related

How to fix problem with zfs mount after upgrade to 12.0-RELEASE?

I had to upgrade my system from 11.1 to 12.0 and now the system does not boot; it stops on the error: Trying mount root zfs - Error 2 unknown filesystem.
I no longer have the old kernel that worked well.
So how do I fix the mount problem?
I had tried to boot with the old kernel, but after one of the freebsd-update upgrade attempts only the new kernel was left.
Expected: no problems after the upgrade.
Actual: cannot load the system, with Error 2 - unknown filesystem.
P.S.
I found that the /boot/kernel folder does not contain the opensolaris.ko module.
How do I copy this module to the /boot partition of the system from a LiveCD (the file exists on the LiveCD)?
Assuming you have a FreeBSD USB stick ready... you can import the pool in the live environment and then mount individual datasets manually.
Assuming "zroot" is your pool name:
# mount -urw /
# zpool import -fR /mnt zroot
# zfs mount zroot/ROOT/default
# zfs mount -a   # in case you want the other datasets mounted too
# cd /mnt
Now do whatever you want...
You can also roll back to the last working snapshot (if there is one).
In case your system is encrypted, you need to decrypt it first.
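With the pool imported at /mnt as above, a sketch of the two follow-up steps the question asks about; it assumes the live environment itself ships opensolaris.ko under /boot/kernel, and the snapshot name is a placeholder:
# Copy the module from the LiveCD's kernel directory into the
# imported system's /boot/kernel (pool mounted at /mnt as above).
cp /boot/kernel/opensolaris.ko /mnt/boot/kernel/
# Alternatively, roll back the boot environment to a known-good
# snapshot, if one exists ("@known-good" is a placeholder name).
zfs rollback zroot/ROOT/default@known-good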

PhpStorm (re)indexes NFS-mounted project from VM

Setup:
Virtual Machine: VMware Fusion with CentOS 7.4.1708 with NFS Server config:
"/dev/ServerPath" 10.20.0.104(rw,fsid=0,sync,crossmnt,no_subtree_check,all_squash,anonuid=1111,anongid=1111)
Local (latest OS X):
Mount:
sudo mount -t nfs -o resvport,rw 10.20.0.136:/dev/LocalPath /Users/USERNAME/dev/ServerPath
Everything works great, except that when opening the project (directory) in PhpStorm, it (re)indexes roughly every 500 ms, with a loading bar showing the operation (Updating Indices). Apart from the risk of an epileptic seizure, I am worried about the write load this puts on the SSD, so I would like to ask the community whether this issue can be fixed and how. The Synchronization setting was already disabled. Could this have something to do with the way the NFS share is exported/mounted?
PhpStorm mentions:
"External file changes sync may be slow: Project files cannot be watched (are they under network mount?)"
Any tips are appreciated, thank you in advance!
As far as I could tell, the problem is not the NFS mount or an infrastructure issue, but how PhpStorm renews its indexes. One quick but short-lived fix is to invalidate the indices and caches by going to:
File > Invalidate Caches / Restart
After that, there is no more rapid re-indexing of directories, and until some unknown change occurs the filesystem is handled properly by PhpStorm.
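If you still want to rule the mount out, NFS attribute caching is the usual client-side knob; a sketch, where the attribute-cache options and the 10-second value are assumptions to verify against man mount_nfs on your OS X version, not something from this answer:
# Longer attribute caching = fewer stat round-trips for the IDE to react to
sudo mount -t nfs -o resvport,rw,acregmax=10,acdirmax=10 10.20.0.136:/dev/LocalPath /Users/USERNAME/dev/ServerPath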

Error mounting NTFS partition in Ubuntu 16.04 in terminal

Hello, I need help. I want to mount drive D in Ubuntu 16.04, but my partitions are in NTFS format (drives C & D).
I had Windows 7 installed on my computer, but then I deleted it and installed Ubuntu 16.04; I only repartitioned drive C and did not change the drive D partition.
That is, I repartitioned C for the Ubuntu OS (home, swap & root); the D partition stayed as it was (D is NTFS).
(screenshot: the Ubuntu partitioning on drive C)
Once Ubuntu was installed, I wanted to open my D drive (NTFS) but got the following error:
(screenshot: the message shown when opening the drive)
And mounting in the terminal gives me this message:
root@mjb:/home/mjb# mount -t "ntfs" /home
Mount is denied because the NTFS volume is already exclusively opened.
The volume may be already mounted, or another software may use it which
could be identified for example by the help of the 'fuser' command.
and this:
sudo mount -t ntfs-3g /dev/sda5 /dummy
[sudo] password for mjb:
The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Failed to mount '/dev/sda5': Operation not permitted
The NTFS partition is in an unsafe state. Please resume and shutdown
Windows fully (no hibernation or fast restarting), or mount the volume
read-only with the 'ro' mount option.
I tried this solution:
Open a terminal.
Type this command: sudo mount -t ntfs -r /dev/sda5 /dummy and press Enter.
The partition then mounts, but I have a new problem:
the partition is read-only, because I passed -r on the command line;
the error message had already told me that I could mount the partition read-only.
My question is: is there a command for mounting the partition read/write?
Open Disks.
Select the partition you are not able to mount, turn off the automatic mounting options, unselect "Mount at system startup", and write ro after the comma, as shown in the image; now you should be able to mount the disk successfully.
It seems like Windows is locking your HD before shutting down.
This happens when you try to access, from another OS, the HD that Windows is installed on: on shutdown, Windows locks access to the HD, because by doing so it can gain some performance when resuming Windows the next time you boot it.
So simply try rebooting Windows before going to Linux; if you shut Windows down and then boot your PC directly into any other OS, you won't be able to access the HD/partition that Windows has access to.
Try Shift+Shutdown in Windows, then boot into Ubuntu. It will mount all the drives.
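If rebooting into Windows is not an option, ntfs-3g can also drop the hibernation data itself; a minimal sketch, assuming /dev/sda5 is the NTFS partition and you accept losing the saved Windows session state:
sudo mkdir -p /mnt/d
# remove_hiberfile discards the Windows hibernation file so the
# volume can be mounted read/write; unsaved Windows state is lost.
sudo mount -t ntfs-3g -o remove_hiberfile /dev/sda5 /mnt/d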

GlusterFS cannot set quota on non-existing directory

I am using glusterfs 3.7.6.
The Gluster Documentation says,
Note You can set the disk limit on the directory even if it is not
created. The disk limit is enforced immediately after creating that
directory.
But when I try to set a quota on a non-existing directory, it fails with the message below.
$ gluster volume quota testVolume limit-usage /quota1 10MB
quota command failed : Failed to get trusted.gfid attribute on path /quota1. Reason : No such file or directory
please enter the path relative to the volume
I tested the same thing on glusterfs 3.3.2 and it worked fine.
So I've looked through the release notes from 3.5 to 3.7.1, but couldn't find anything about this.
Does glusterfs 3.7 no longer support quotas on non-existing directories?
Or am I doing something wrong?
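No answer is recorded here, but the error ("No such file or directory") suggests an obvious workaround sketch: create the directory through a client mount first, then set the limit. The mount point /mnt/testVolume is an assumption; the volume and path are the ones from the question:
# Create the directory on a client mount of the volume first...
mkdir /mnt/testVolume/quota1
# ...then limit-usage has a gfid to attach the limit to.
gluster volume quota testVolume limit-usage /quota1 10MB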

Cassandra: moving data_file_directories

Regarding the location of the Cassandra-created data files and system files: I need to move the "commitlog_directory", "data_file_directories" and "saved_caches_directory", which are set in the "cassandra.yaml" config file. They are currently at the default location "/var/lib/cassandra". The data is only some test data, plus of course the system-generated keyspaces, which are
dse_perf
dse_system
OpsCenter
system
system_traces
There are also the commitlog and saved_caches.db to move.
I am thinking of moving the keyspace directories with Linux shell commands, but I'm very unsure whether they might somehow become corrupt. There is simply no space on the default drive, and we need to move everything to the secondary and tertiary mounted drives.
Right now I'm in the process of moving all the files and resetting the yaml settings.
I have two questions -
Regarding the cassandra.yaml file, are there any other files besides it that depend on knowing the location of commitlog_directory, data_file_directories and saved_caches_directory, whose 'wrong location' will cause failure once I move all these files? I am also concerned that the files inside the tables themselves (like the db files) hold references to their own location and will cause failures once they are moved.
If I just change the three settings commitlog_directory, data_file_directories and saved_caches_directory, will DSE/Cassandra actually create all the system keyspaces (system_traces, dse_perf, system, OpsCenter, dse_system), plus the commitlog and the saved_caches.db, and will any other upstream config file be out of sync with that (same as the first part of question 1)?
It is a very new installation, so reinstalling would not be the end of the world, but I really don't want to, because we have Kerberos and all kinds of other stuff on top of this cluster now.
The OS is Ubuntu 14.04 and the DSE version is 4.7.
I just finished doing this. My instances are in AWS EC2, so your process may vary, but in essence:
Create a new volume and attach it to the instance. My new device was /dev/xvdg.
Create a new mount point: sudo mkdir /new_data
Format the new volume: sudo mkfs -t ext4 /dev/xvdg
Edit /etc/fstab so that your mount will survive reboots, adding this line: /dev/xvdg /new_data ext4 defaults,nofail,nobootwait 0 2
Mount the new volume: sudo mount -a
Make the new directories: sudo mkdir -p /new_data/lib/cassandra/commitlog
Change the ownership: sudo chown -R cassandra:cassandra /new_data/lib/cassandra
Change cassandra.yaml to point to the new dirs.
Drain the node. If you're moving the data dir, copy the data from the old location to the new location; if you're moving the commitlog only, just restart Cassandra.
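For reference, a sketch of what the relevant cassandra.yaml entries might look like after the move; the exact subdirectory names (data, commitlog, saved_caches) under /new_data/lib/cassandra are assumptions mirroring the default layout:
data_file_directories:
    - /new_data/lib/cassandra/data
commitlog_directory: /new_data/lib/cassandra/commitlog
saved_caches_directory: /new_data/lib/cassandra/saved_caches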
I was able to move all the files, including the commitlog. I changed the yaml and pointed it to where I wanted everything to go. Remember to run chown -R cassandra:cassandra on the new directories afterward.
And voila! Everything is reading/writing as it should. Cassandra is neato.
