Enable Direct I/O mode in GlusterFS - fuse

The GlusterFS server ignores the O_DIRECT flag by default. How can the server be made to work in direct-io mode?
With mount -t glusterfs XXX:/testvol -o direct-io-mode=enable mountpoint, the GlusterFS client works in direct-io mode, but files are still cached on the server side.
How can both the client and the server be made to work in direct-io mode?

direct-io-mode=enable on the client mount
performance.strict-o-direct enabled on the volume
network.remote-dio disabled on the volume
These should be sufficient, provided your application does page-aligned I/O, which is mandatory for O_DIRECT anyway.
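Assuming a volume named testvol (as in the mount example above), the volume-side settings can be applied with the gluster CLI; this is a sketch, and option names or accepted values may vary slightly between GlusterFS releases:

```shell
# Server side: honor O_DIRECT on the bricks and stop converting
# remote O_DIRECT requests into cached I/O.
gluster volume set testvol performance.strict-o-direct on
gluster volume set testvol network.remote-dio disable

# Client side: bypass the FUSE page cache as well.
# "server" is a placeholder for one of your GlusterFS server hostnames.
mount -t glusterfs server:/testvol -o direct-io-mode=enable /mnt/testvol
```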


PhpStorm (Re)Indexes NFS-mounted Project from VM

Setup:
Virtual Machine: VMware Fusion with CentOS 7.4.1708 with NFS Server config:
"/dev/ServerPath" 10.20.0.104(rw,fsid=0,sync,crossmnt,no_subtree_check,all_squash,anonuid=1111,anongid=1111)
Local machine: latest OS X
Mount:
sudo mount -t nfs -o resvport,rw 10.20.0.136:/dev/LocalPath /Users/USERNAME/dev/ServerPath
Everything works great except that, with the project (directory) open in PhpStorm, it re-indexes roughly every 500 ms, with a loading bar showing the operation (Updating Indices). Aside from the seizure-inducing flicker, I am worried about the write load this puts on the SSD, so I want to ask the community whether this issue can be fixed, and how. The Synchronization setting was disabled. Could this have something to do with the way the NFS share is exported/mounted?
PhpStorm mentions:
"External file changes sync may be slow: Project files cannot be watched (are they under network mount?)"
Any Tips are appreciated, thank you in advance!
As far as I can tell, the problem is not with the NFS mount or the infrastructure, but with how PhpStorm refreshes its indexes. One quick but short-lived fix is to invalidate the indices and caches by going to:
File > Invalidate Caches / Restart
After that, the rapid re-indexing of directories stops and, until some unknown change occurs, the filesystem is handled properly by PhpStorm.
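If the churn is driven by NFS attribute revalidation, lengthening the client's attribute cache may also help. actimeo is a standard NFS mount option; whether it actually quiets PhpStorm's watcher is an assumption here, not a verified fix:

```shell
# Cache NFS file attributes for 60 s instead of the default few seconds,
# so the client sees fewer apparent metadata changes. The effect on
# PhpStorm's re-indexing is an assumption, not a confirmed result.
sudo mount -t nfs -o resvport,rw,actimeo=60 10.20.0.136:/dev/LocalPath /Users/USERNAME/dev/ServerPath
```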

Openshift disable build quota

I'm trying to use buildconfig/builds in OpenShift. The machine is a CentOS 7.3 with kernel 4.5.7-std-3
Unfortunately the kernel I'm using doesn't have CONFIG_CFS_BANDWIDTH enabled.
gunzip < /proc/config.gz | grep CFS
# CONFIG_CFS_BANDWIDTH is not set
Therefore every build I try instantly fails with:
error: failed to retrieve cgroup limits: cannot determine cgroup limits: open /sys/fs/cgroup/cpu/cpu.cfs_quota_us: no such file or directory
Is there a way to bypass this?
I already disabled the quotas inside the kubelet section on the node config file without success.
Unfortunately, it's currently necessary to enable cgroups for builds to work properly. You can see additional discussion of this issue here:
https://github.com/openshift/origin/issues/8074
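Whether the option is present can be checked up front. This sketch assumes either /proc/config.gz (requires CONFIG_IKCONFIG_PROC=y) or the distribution's /boot config file is available:

```shell
# Check whether the running kernel was built with CFS bandwidth control.
# /proc/config.gz exists only with CONFIG_IKCONFIG_PROC=y; fall back to
# the /boot config file shipped by most distributions.
if zgrep -q '^CONFIG_CFS_BANDWIDTH=y' /proc/config.gz 2>/dev/null \
   || grep -q '^CONFIG_CFS_BANDWIDTH=y' "/boot/config-$(uname -r)" 2>/dev/null; then
    echo "CFS bandwidth control available"
else
    echo "CFS bandwidth control missing: OpenShift builds will fail"
fi
```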

running wireshark from chroot jail

Hi, I want to run Wireshark from inside a chroot jail,
but when I run it, it gives the following error:
WARNING: no socket to connect to
I have tried and searched everywhere, but found no explanation so far.
Even if it cannot be made to work, I want to understand why it is not working.
Wireshark uses GnuTLS to try to decrypt SSL/TLS connections.
Apparently GnuTLS uses gnome-keyring on some systems, and gnome-keyring is probably what's printing the messages.
My guess is that it's trying to connect to some daemon running on your machine over a UNIX-domain socket, but the chroot jail is preventing it from accessing the socket.
If that's not preventing Wireshark from running, just ignore the warning.
If it is preventing Wireshark from running ("I get a warning when I run X" is not the same as "X doesn't run"), you might not be able to run Wireshark in a chroot jail, unless there's some way to let gnome-keyring connect to that daemon from inside the jail.
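One common way to expose such a daemon's UNIX-domain socket inside a jail is a bind mount. The paths below are assumptions (the keyring socket location varies by distribution and session); the jail path is a placeholder:

```shell
# Bind-mount the directory containing the daemon's socket into the jail.
# /run/user/1000 is a guess at where the keyring daemon listens; adjust
# both paths to your actual setup before relying on this.
mkdir -p /path/to/jail/run/user/1000
mount --bind /run/user/1000 /path/to/jail/run/user/1000
```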

You don't have permission to access / on this server ubuntu 14.04

Agenda: to have a common project folder between Linux and Windows.
I have changed my document root from /var/www/html to /media/mithun/Projects/test on my Ubuntu 14.04 machine.
I get this error:
Forbidden
You don't have permission to access / on this server.
Apache/2.4.7 (Ubuntu) Server at localhost Port 80
So I added the following to /etc/apache2/sites-available/000-default.conf (edited with sudo gedit):
# DocumentRoot /var/www/html
DocumentRoot /media/mithun/Projects/test
DocumentRoot /var/www/test works, but not a path on the Windows NTFS partition drive.
Even after referring to :
Error message "Forbidden You don't have permission to access / on this server"
Issue with my Ubuntu Apache Conf file. (Forbidden You don't have permission to access / on this server.)
No success :( So kindly assist me with it...
Note: Projects is a new volume (an internal drive; in Windows it is the E:/ drive).
@Lmwangi - please check my updates for your reference below:
Output of ls /etc/apparmor.d/:
abstractions lightdm-guest-session usr.bin.evince usr.sbin.cupsd
cache local usr.bin.firefox usr.sbin.mysqld
disable sbin.dhclient usr.lib.telepathy usr.sbin.rsyslogd
force-complain tunables usr.sbin.cups-browsed usr.sbin.tcpdump
I tried killing AppArmor:
sudo /etc/init.d/apparmor kill
which only printed its usage: Usage: /etc/init.d/apparmor
{start|stop|restart|reload|force-reload|status|recache}
After this, I was also able to restart Apache successfully.
Maybe the problem is simple: is your new root directory accessible to the www-data user?
Try :
$ chown -R www-data:www-data /media/mithun/Projects
As you have discovered by now, you cannot just manipulate permissions on an NTFS partition (using tools like chmod).
However, you can try forcing a given owner/permissions for the entire partition when you mount it.
The way to do this depends on the NTFS utilities you are actually using (which I don't know, so I'm assuming you are using ntfs-3g).
E.g. mount the partition with the following parameters (replace /dev/sdX with your actual partition, and /path/where/the/drive/is/mounted with your target path):
mount -o gid=www-data /dev/sdX /path/where/the/drive/is/mounted
should make all the files on the partition belong to the www-data group.
If the filesystem sets the group ownership explicitly, this still might not work.
In that case, you might need to set up a usermap that maps your Windows users/groups (as found on the partition) to your Linux users/groups.
The ntfs-3g.usermap utility will help you generate an initial usermap file, which you can then edit to your needs:
ntfs-3g.usermap /dev/sdX
Then pass the usermap to the mount options:
mount -o usermapping=/path/to/usermap.file /dev/sdX /path/where/the/drive/is/mounted
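To make such a mapping persistent across reboots, the same options can go into /etc/fstab. This is a sketch; the device name, mount point, and usermap path are placeholders for your actual setup:

```shell
# Example /etc/fstab entry (assumes ntfs-3g is installed).
# /dev/sdX, the mount point, and the usermap path are placeholders.
/dev/sdX  /media/mithun/Projects  ntfs-3g  gid=www-data,usermapping=/etc/ntfs-usermap  0  0
```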
I suspect that you have AppArmor enforcing rules that prevent Apache from reading non-whitelisted directory paths. I suggest that you:
1. Edit the AppArmor config for Apache to allow access to your custom path. You'll need to hunt around /etc/apparmor.d/. You may also find running AppArmor in non-enforcing (complain) mode helpful:
$ sudo aa-complain /etc/apparmor.d/*
2. Use mod_apparmor? See this
3. Or disable AppArmor completely. See this
My order of preference would be 1, 3, 2. That should fix this for you :)
While using Ubuntu alongside Windows I faced the same issue, and it was resolved by remounting the drive with read and write access. The following command will do that:
sudo mount -o remount,rw /disk/location /disk/new_location
If it is still not working, then in Windows go to Power Options and disable Fast Startup.
When you shut down a computer with Fast Startup enabled, Windows locks down the Windows hard disk. You won’t be able to access it from other operating systems if you have your computer configured to dual-boot. Even worse, if you boot into another OS and then access or change anything on the hard disk (or partition) that the hibernating Windows installation uses, it can cause corruption. If you’re dual booting, it’s best not to use Fast Startup or Hibernation at all.
Original article: https://www.howtogeek.com/243901/the-pros-and-cons-of-windows-10s-fast-startup-mode/

check status of ZFS pool on Linux host with Icinga monitoring system

I have a server which is used for backup storage. It's running ZFS on Linux, configured with a RAID-Z2 data pool and shared via Samba.
I need to monitor the ZFS filesystem, at least to be able to see how much space is available.
I thought a simple check_disk plugin would do the job.
I'm able to execute the command from the Icinga server CLI:
sudo -u nagios /usr/lib/nagios/plugins/check_nrpe -H <hostname> -c check_disk -a 10% 20% /data/backups
DISK OK - free space: /data/backups 4596722 MB (30% inode=99%);| /data/backups=10355313MB;13456832;11961628;0;14952036
But the GUI shows the following error:
DISK CRITICAL - /data/backups is not accessible: No such file or directory
It works under the check_mk monitoring system, but we are migrating away from check_mk right now.
I don't have any problems with checking other filesystems (root, boot) in Icinga on this machine.
I would appreciate any advice.
Thanks
This line is in /etc/icinga/objects/linux.cfg on the server:
check_command check_nrpe_1arg!check_backup
And this line is in /etc/nagios/nrpe.cfg on the client:
command[check_backup]=/usr/lib64/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
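One thing worth checking (an assumption, not a confirmed diagnosis): check_nrpe_1arg command definitions typically pass no -a arguments over NRPE, so $ARG1$ through $ARG3$ expand to empty strings and check_disk runs without a valid -p path, which would produce exactly an "is not accessible" error. A sketch of a client-side command definition with the values hard-coded instead:

```shell
# /etc/nagios/nrpe.cfg on the client -- hard-code thresholds and path so
# the check no longer depends on arguments arriving over NRPE.
# (Sketch only; matches the values used in the working CLI invocation.)
command[check_backup]=/usr/lib64/nagios/plugins/check_disk -w 10% -c 20% -p /data/backups
```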
