GlusterFS cannot set quota on a non-existent directory - glusterfs

I am using glusterfs 3.7.6.
The Gluster documentation says:
Note: You can set the disk limit on the directory even if it is not created. The disk limit is enforced immediately after creating that directory.
But when I try to set a quota on a non-existent directory, it fails and shows the message below.
$ gluster volume quota testVolume limit-usage /quota1 10MB
quota command failed : Failed to get trusted.gfid attribute on path /quota1. Reason : No such file or directory
please enter the path relative to the volume
I tested the same thing on glusterfs 3.3.2 and it worked fine.
So I've looked through the release notes from 3.5 to 3.7.1, but couldn't find anything about this.
Does glusterfs 3.7 no longer support setting a quota on a non-existent directory?
Or am I doing something wrong?
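For reference, the same limit-usage command succeeds once the directory exists on the volume. A minimal sketch of that workaround, where the server name and client mount point are assumptions rather than details from the question:
mount -t glusterfs server1:/testVolume /mnt/testVolume
mkdir -p /mnt/testVolume/quota1
gluster volume quota testVolume limit-usage /quota1 10MB
Once /quota1 resolves on the volume, the command no longer fails with the trusted.gfid error.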

Related

NiFi 1.10.0 - PutFile does not see the destination directory

We are facing a peculiar problem on one of our two environments. A PutFile processor throws the following error:
PutFile[id=xxx] Penalizing StandardFlowFileRecord[uuid=xxx,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=xxx, container=default, section=1012], offset=94495, length=9778],offset=0,name=xxxxxxxxxxxxxxxx_PROD_20200513020001.json.gz,size=9778] and routing to 'failure' because the output directory /data/home/datadelivery/OUT/Test does not exist and Processor is configured not to create missing directories
After enabling the creation of missing directories, the error changes to:
Could not set create directory with permissions 664 because /data/home/datadelivery/OUT/Test: java.nio.file.AccessDeniedException: /data/home/datadelivery/OUT/TestPutFile[id=xxx...
Based on the error message one would think that it is an issue with file and folder permissions, however, the path /data/home/datadelivery/OUT/Test exists, and the nifi user can access and create files and folders in there as well (verified from the command line). The same folder permissions and ownership rights are configured on our DEV environment, where the PutFile processor works as expected. We could change the configuration to use a different location, but I'd rather find the root cause instead.
Where should I start debugging?
Thank you for your help in advance!
Kind regards, Julius
Strange issue. I would try setting full permissions on the folder/file you want to write to (i.e. chmod 777 plus chown nifi:nifi, recursively) and see if the error is still there. If not, it's at least a start...
Restarting the NiFi service solved the problem. The issue was that the Unix user (nifi) had been modified months after the NiFi service was started. Most probably this is why the PutFile processor was unable to access a folder that the nifi Unix user could.
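For anyone debugging a similar case, a quick sanity check is to exercise the target directory as the nifi user from a shell. A minimal sketch, using the path from the error message above:
sudo -u nifi ls -ld /data/home/datadelivery/OUT/Test
sudo -u nifi touch /data/home/datadelivery/OUT/Test/.nifi_write_test
sudo -u nifi rm /data/home/datadelivery/OUT/Test/.nifi_write_test
If these succeed while the processor still fails, the running NiFi JVM may be holding a stale view of the user (for example group membership read at startup), which is consistent with a restart fixing it.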

How to fix a ZFS mount problem after upgrading to 12.0-RELEASE?

So I had to upgrade my system from 11.1 to 12.0 and now the system does not boot. It stops on the error: Trying mount root zfs - Error 2 unknown filesystem.
And I no longer have the old kernel that worked well.
So how do I fix the mount problem?
I had tried to boot with the old kernel, but after one of the freebsd-update upgrade attempts only the new kernel was left.
Expected: no problems after the upgrade.
Actual: the system cannot boot, failing with Error 2 - unknown filesystem.
P.S.
Found that the /boot/kernel folder does not contain the opensolaris.ko module.
How can I copy this module from the LiveCD to the /boot partition on the system (the file exists on the LiveCD)?
Assuming you have a FreeBSD USB stick ready, you can import the pool in a live environment and then mount individual datasets manually.
Assuming "zroot" is your pool name:
# mount -urw /
# zpool import -fR /mnt zroot
# zfs mount zroot/ROOT/default
# zfs mount -a    # in case you want all datasets mounted
# cd /mnt
Now do whatever you want...
You can also roll back to the last working snapshot (if there is one).
In case your system is encrypted, you need to decrypt it first.
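To address the specific question about opensolaris.ko: once the pool is imported under /mnt as above, the module can be copied from the live environment with a plain cp. A sketch, assuming the live system ships the module in its own /boot/kernel:
# cp /boot/kernel/opensolaris.ko /mnt/boot/kernel/
# ls -l /mnt/boot/kernel/opensolaris.ko
Then export the pool (zpool export zroot) and reboot into the installed system.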

get execution path without /proc mounted

I have a shared hosting provider who does not mount /proc for security reasons.
I want to execute a binary written in Go which needs the path from which it was started. Go resolves this by calling readlink on the virtual link /proc/self/exe
(see source https://github.com/golang/go/blob/master/src/os/executable_procfs.go)
But this link can't be found due to the fact, that /proc is not mounted.
Args[0] alone is not enough because the file can be called via "./app".
Is there another option to get the execution path? Thanks for any help!
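If changing the Go code is not an option, one shell-level workaround is to start the binary through a small wrapper that always execs it by absolute path, so a fallback based on os.Args[0] (e.g. filepath.Abs(os.Args[0])) has something reliable to work with. A sketch; the wrapper and binary names are hypothetical:
#!/bin/sh
# run-app.sh: resolve this script's directory and exec the binary by absolute path,
# so the Go process sees an absolute os.Args[0] even without /proc mounted.
APP_DIR="$(cd "$(dirname "$0")" && pwd)"
exec "$APP_DIR/app" "$@"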

Kubernetes unable to mount NFS filesystem on Google Container Engine

I am following the basic nfs server tutorial here, however when I try to create the test busybox replication controller I get an error indicating that the mount has failed.
Can someone point out what I am doing wrong?
MountVolume.SetUp failed for volume
"kubernetes.io/nfs/4e247b33-a82d-11e6-bd41-42010a840113-nfs"
(spec.Name: "nfs") pod "4e247b33-a82d-11e6-bd41-42010a840113" (UID:
"4e247b33-a82d-11e6-bd41-42010a840113") with: mount failed: exit
status 32 Mounting arguments: 10.63.243.192:/exports
/var/lib/kubelet/pods/4e247b33-a82d-11e6-bd41-42010a840113/volumes/kubernetes.io~nfs/nfs
nfs [] Output: mount: wrong fs type, bad option, bad superblock on
10.63.243.192:/exports, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a
/sbin/mount. helper program) In some cases useful info is found
in syslog - try dmesg | tail or so
I have tried using an Ubuntu VM as well, just to see if I could mitigate a possibly missing /sbin/mount.nfs dependency by running apt-get install nfs-common, but that too fails with the same error.
Which container image are you using? On the 18th of October Google announced a new container image, which doesn't support NFS yet. Since Kubernetes 1.4 this image (called gci) is the default. See also https://cloud.google.com/container-engine/docs/node-image-migration#known_limitations
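If you need NFS before gci supports it, one option is to run the nodes on the older container_vm image instead. The exact flag spelling can vary between gcloud releases, so treat this only as a sketch with a placeholder cluster name:
gcloud container clusters create my-cluster --image-type=container_vm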

Can't Write to /sys/kernel/ to disable Transparent Huge Pages (THP) for MongoDB on OVH CentOS 7

My Issue
I am having trouble removing MongoDB warnings about Transparent Huge Pages (THP) on an OVH CentOS 7 installation, and the issue appears to be the inability to write to /sys/kernel/mm as root.
First, I realize the OVH kernel is customized, and I know many of you will say to go with a fresh non-customized kernel, but that's not an option right now. I need to solve this problem for the current OS.
MongoDB Warnings:
2016-03-09T00:31:45.889-0500 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
2016-03-09T00:31:45.889-0500 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
MongoDB is trying to read the transparent_hugepage files (below), but they do not exist:
/sys/kernel/mm/transparent_hugepage/enabled
/sys/kernel/mm/transparent_hugepage/defrag
Cannot Create the Files
All of the solutions I've seen involve creating the files and populating them with never, including the script in the MongoDB documentation. In all of the solutions, this is the key part:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
However, the files do not exist, and I cannot create anything under /sys/kernel/mm as root.
root@myhost [~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
-bash: /sys/kernel/mm/transparent_hugepage/enabled: No such file or directory
root@myhost [~]# mkdir -p /sys/kernel/mm/transparent_hugepage
mkdir: cannot create directory ‘/sys/kernel/mm/transparent_hugepage’: Operation not permitted
The owner and group of directory /sys/kernel/mm are root, and I have temporarily changed the permissions from 700 to 777, yet I still cannot create the directory as root.
Tuned Profile Also Doesn't Help
To be thorough, I have also created the custom Tuned profile (per instructions in MongoDB link above) and activated it, but it generates the error WARNING tuned.plugins.plugin_vm: Option 'transparent_hugepages' is not supported on current hardware.
Tuned Profile (/etc/tuned/no-thp/tuned.conf):
[main]
include=virtual-guest
[vm]
transparent_hugepages=never
Error in Tuned log:
WARNING tuned.plugins.plugin_vm: Option 'transparent_hugepages' is not supported on current hardware.
Some Solution in MongoDB Itself?
It seems like the best solution would be to somehow explicitly configure MongoDB not to use THP so that it wouldn't have to check for the missing files, but I've seen nothing like this. If there is a way, even if it involves customizing MongoDB (and repeating after every update), I'm willing to do it.
Right now I've installed CentOS 7 on OVH. They use /boot/bzImage-3.14.32-xxxx-grs-ipv6-64, which implements grsecurity (https://grsecurity.net) and precludes access to some folders.
The warnings from MongoDB about huge pages can be resolved very simply by replacing the kernel. The procedure for CentOS 7 is as follows:
Download the required kernel from the OVH FTP server (ftp://ftp.ovh.net/made-in-ovh/bzImage2) into the /boot folder.
Edit /etc/grub2.cfg:
# linux /boot/bzImage-3.14.32-xxxx-grs-ipv6-64 root=/dev/md1 ro net.ifnames=0
linux /boot/bzImage-4.8.17-xxxx-std-ipv6-64 root=/dev/md1 ro net.ifnames=0
Here I replaced the default bzImage-3.14.32-xxxx-grs-ipv6-64 with bzImage-4.8.17-xxxx-std-ipv6-64, which does not include grsecurity.
Now reboot and check that the new kernel is running:
[root@ns506846 ~]# uname -r
4.8.17-xxxx-std-ipv6-64
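After booting the std kernel, the transparent_hugepage entries MongoDB probes should be present again, so the standard approach from the question applies. A quick check and one-off disable (not persistent across reboots) might look like:
cat /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag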
