Setup:
Virtual Machine: VMware Fusion with CentOS 7.4.1708 with NFS Server config:
"/dev/ServerPath" 10.20.0.104(rw,fsid=0,sync,crossmnt,no_subtree_check,all_squash,anonuid=1111,anongid=1111)
Local machine (latest OS X):
Mount:
sudo mount -t nfs -o resvport,rw 10.20.0.136:/dev/LocalPath /Users/USERNAME/dev/ServerPath
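For reference, the export can be double-checked from the Mac before mounting (showmount is available on OS X; the server IP is the one in the mount command above):
showmount -e 10.20.0.136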
Everything works great, except that when the project (directory) is open in PhpStorm, it re-indexes roughly every 500 ms and a progress bar shows the operation (Updating Indices). Apart from the seizure-inducing flicker, I am mainly worried about the constant write operations hitting the SSD, so I would like to ask the community whether this issue can be fixed and how. The synchronisation setting is already disabled. Could this have something to do with the way the NFS share is exported/mounted?
PhpStorm mentions:
"External file changes sync may be slow: Project files cannot be watched (are they under network mount?)"
Any tips are appreciated, thank you in advance!
As far as I can tell, the problem is not with the NFS mount or the infrastructure but with how PhpStorm refreshes its indexes. One quick but short-lived fix is to invalidate the indexes and caches by going to:
File > Invalidate Caches / Restart
After that, the constant re-indexing of directories stops and, until some unknown change triggers it again, PhpStorm handles the filesystem properly.
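A further idea I have not verified: since the warning points at the file watcher, it might help to turn the native watcher off explicitly so PhpStorm stops trying to watch the network mount. The property name should be checked against your PhpStorm version; it goes into idea.properties (Help > Edit Custom Properties, if available):
idea.filewatcher.disabled=true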
Related
I have Cygwin running on a Windows VM and I'm having problems keeping a stable SSHD service running.
The issue is that the file permissions on /etc/ssh_host_ecdsa_key, $HOME/.ssh, etc. are being randomly reset, so SSH connections are refused because I am using strict mode. When I stop the SSHD service, reset all the relevant file and folder permissions, and restart SSHD, SSH connections work fine until they apparently randomly stop, and sure enough all the relevant SSHD directory and file permissions have been reset again.
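For reference, the reset cycle I run each time looks roughly like this (assuming the service is registered with cygrunsrv under the name sshd; strict mode mainly cares that the keys and .ssh directory are not group/world accessible):
cygrunsrv -E sshd
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
chmod 600 /etc/ssh_host_*_key
cygrunsrv -S sshd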
Has anyone encountered this problem before and know a possible solution?
In case somebody is researching a similar issue: the problem was due to instability of the VM's C: drive after hibernating, logging off, etc.
The solution was, instead of installing Cygwin directly on the VM, to install Cygwin on a separate networked drive mounted in the VM. It works perfectly and there are no more instability issues.
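For anyone repeating this, a rough sketch of the setup (the share name, drive letter, and paths here are just examples, and the installer flags should be verified against the current Cygwin setup.exe):
net use Z: \\fileserver\cygwin-share /persistent:yes
setup-x86_64.exe --root Z:\cygwin --quiet-mode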
My Issue
I am having trouble removing MongoDB warnings about Transparent Huge Pages (THP) on an OVH CentOS 7 installation, and the issue appears to be the inability to write to /sys/kernel/mm as root.
First, I realize the OVH kernel is customized, and I know many of you will say to go with a fresh non-customized kernel, but that's not an option right now. I need to solve this problem for the current OS.
MongoDB Warnings:
2016-03-09T00:31:45.889-0500 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
2016-03-09T00:31:45.889-0500 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
MongoDB is trying to read the transparent_hugepage files (below), but they do not exist:
/sys/kernel/mm/transparent_hugepage/enabled
/sys/kernel/mm/transparent_hugepage/defrag
Cannot Create the Files
All of the solutions I've seen involve creating the files and populating them with never, including the script in the MongoDB documentation. In all of the solutions, this is the key part:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
However, the files do not exist, and I cannot create anything under /sys/kernel/mm as root.
root@myhost [~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
-bash: /sys/kernel/mm/transparent_hugepage/enabled: No such file or directory
root@myhost [~]# mkdir -p /sys/kernel/mm/transparent_hugepage
mkdir: cannot create directory ‘/sys/kernel/mm/transparent_hugepage’: Operation not permitted
The owner and group of directory /sys/kernel/mm are root, and I have temporarily changed the permissions from 700 to 777, yet I still cannot create the directory as root.
Tuned Profile Also Doesn't Help
To be thorough, I have also created the custom Tuned profile (per the instructions in the MongoDB link above) and activated it, but it only generates the warning shown below.
Tuned Profile (/etc/tuned/no-thp/tuned.conf):
[main]
include=virtual-guest
[vm]
transparent_hugepages=never
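For completeness, the profile was created and activated roughly like this (standard tuned-adm usage; the directory name no-thp matches the path above):
mkdir -p /etc/tuned/no-thp
# place the tuned.conf shown above in that directory
tuned-adm profile no-thp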
Error in Tuned log:
WARNING tuned.plugins.plugin_vm: Option 'transparent_hugepages' is not supported on current hardware.
Some Solution in MongoDB Itself?
It seems like the best solution would be to somehow explicitly configure MongoDB not to use THP so that it wouldn't have to check for the missing files, but I've seen nothing like this. If there is a way, even if it involves customizing MongoDB (and repeating after every update), I'm willing to do it.
Right now I have CentOS 7 installed on OVH. They use /boot/bzImage-3.14.32-xxxx-grs-ipv6-64, which implements grsecurity (https://grsecurity.net) and precludes access to some folders.
The warnings from MongoDB about huge pages can be resolved very simply by replacing the kernel. The procedure for CentOS 7 is as follows:
Download the required kernel from the OVH FTP server (ftp://ftp.ovh.net/made-in-ovh/bzImage2) into the /boot folder.
Edit /etc/grub2.cfg:
# linux /boot/bzImage-3.14.32-xxxx-grs-ipv6-64 root=/dev/md1 ro net.ifnames=0
linux /boot/bzImage-4.8.17-xxxx-std-ipv6-64 root=/dev/md1 ro net.ifnames=0
Here I replaced the default bzImage-3.14.32-xxxx-grs-ipv6-64 with bzImage-4.8.17-xxxx-std-ipv6-64, which does not include grsecurity.
Now, reboot and check if the new kernel is ok:
[root@ns506846 ~]# uname -r
4.8.17-xxxx-std-ipv6-64
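With the standard kernel the transparent_hugepage entries should exist again, so the usual commands from the MongoDB documentation can now be applied:
cat /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag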
I have the following Linux environment configuration:
Machine 1: Samba server
[share]
comment = Data
path = /share
force create mode = 0777
force directory mode = 0777
force user = root
force group = root
writeable = Yes
read only = No
guest ok = Yes
Machine 2: mounts machine 1's share folder using the autofs service. auto.app file content:
/store -fstype=cifs,cache=none,forcedirectio,noac ://machine1/share
Machine 3: mounts machine 1's share folder using the autofs service. auto.app file content:
/store -fstype=cifs,cache=none,forcedirectio,noac ://machine1/share
The problem I'm facing is that if I update a file under the /store folder on machine 2, it takes a couple of seconds (~5 seconds) for the change to become visible in the /store folder on machine 3. I want changes to become visible on machine 3 right away, and I don't care about any performance implications.
It looks like a caching problem to me, but I couldn't find a way to disable it so far. What I've tried was passing the cache=none,forcedirectio,noac parameters, but with no success.
Any ideas?
Thanks
I know it's late, but on RHEL 5.8 we got the caching disabled at a system level with echo 0 > /proc/fs/cifs/LookupCacheEnabled.
The LookupCacheEnabled file holds the CIFS configuration for the number of seconds to wait before refreshing the cache; setting the value to 0 disables the cache. Hope it helps someone.
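Note that this echo does not survive a reboot; one way to make it persistent (assuming your system still runs /etc/rc.local at boot) is to append it there:
echo 'echo 0 > /proc/fs/cifs/LookupCacheEnabled' >> /etc/rc.local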
The way I solved the caching issue was to drop Samba and install NFS.
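A minimal sketch of what that switch can look like, assuming standard nfs-utils on machine 1 and the same /share path; the export options and autofs map syntax are illustrative and should be checked for your distribution:
# /etc/exports on machine 1
/share machine2(rw,sync,no_subtree_check) machine3(rw,sync,no_subtree_check)
# auto.app on machines 2 and 3
/store -fstype=nfs,rw,noac machine1:/share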
I am trying to set up a file/DLNA server on a Raspberry Pi (Raspbian Wheezy) so the files can be shared by all the devices I use, Android and Linux at a minimum.
I have a USB drive with some decent storage where I keep all my files. So far, I have had NFS and DLNA serving the USB drive's contents.
Recently, I installed ownCloud, which requires the ownCloud data directory to be owned by www-data. I have mounted the USB drive (from fstab) with the options rw,user,uid=33,gid=33,mask=007. ownCloud works fine (though it is very slow to render the contents).
My NFS exports file is as follows:
/owncloud_data/mystuff *(rw,all_squash,anonuid=33,anongid=33,no_subtree_check)
Running showmount -e localhost displays the following:
Export list for localhost:
/owncloud_data/mystuff (everyone)
However, when I issue
sudo mount localhost:/owncloud_data/mystuff /my_nfs
I get the following error:
mount.nfs: access denied by server while mounting localhost:/owncloud_data/mystuff
I don't understand why. My guess is that this is because /owncloud_data/mystuff is owned by www-data. But the NFS server runs as root; should it not be able to read the data? Or am I missing something here? I don't get any useful logs in /var/log/messages; I tried including the --debug all option in the NFS config.
I haven't started on the DLNA part yet (I have installed minidlna, which was working with NFS before I installed ownCloud).
OR, is there a better solution for what I am trying to do?
Please let me know if you need more information in this regard.
Thanks
I won't mark this as the answer; it is a workaround.
The problem is that if I export /owncloud_data/mystuff, the NFS mount does not work. If I export all of /owncloud_data, it works fine (with the export options mentioned in the original post). I then simply mount /owncloud_data/mystuff on the client side (though technically I could mount /owncloud_data there).
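Concretely, the working setup looks roughly like this (same export options as in the question), followed by re-exporting and mounting the subdirectory on the client:
# /etc/exports
/owncloud_data *(rw,all_squash,anonuid=33,anongid=33,no_subtree_check)
sudo exportfs -ra
sudo mount localhost:/owncloud_data/mystuff /my_nfs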
I will be happy if anybody can explain this behaviour and show how to export /owncloud_data/mystuff directly.
Agenda: to have a common project folder between Linux and Windows.
I have changed my document root from /var/www/html to /media/mithun/Projects/test on my Ubuntu 14.04 machine.
I get the following error:
Forbidden
You don't have permission to access / on this server.
Apache/2.4.7 (Ubuntu) Server at localhost Port 80
So I made the following change in /etc/apache2/sites-available/000-default.conf (edited with sudo gedit):
# DocumentRoot /var/www/html
DocumentRoot /media/mithun/Projects/test
A DocumentRoot of /var/www/test works, but not one on the Windows NTFS partition drive.
Even after referring to:
Error message "Forbidden You don't have permission to access / on this server"
Issue with my Ubuntu Apache Conf file. (Forbidden You don't have permission to access / on this server.)
I had no success. :( So kindly assist me with it.
Note: Projects is a new volume (an internal drive; in Windows it is the E: drive).
@Lmwangi - please check my updates for your reference below:
Output of ls /etc/apparmor.d/:
abstractions lightdm-guest-session usr.bin.evince usr.sbin.cupsd
cache local usr.bin.firefox usr.sbin.mysqld
disable sbin.dhclient usr.lib.telepathy usr.sbin.rsyslogd
force-complain tunables usr.sbin.cups-browsed usr.sbin.tcpdump
I tried killing apparmor:
sudo /etc/init.d/apparmor kill
I received the following output:
Usage: /etc/init.d/apparmor {start|stop|restart|reload|force-reload|status|recache}
After this, I was also able to restart Apache successfully.
Maybe the problem is simple: is your new root directory accessible to the www-data user?
Try:
$ chown -R www-data:www-data /media/mithun/Projects
As you have discovered by now, you cannot simply manipulate permissions on an NTFS partition (using tools like chmod).
However, you can try forcing a given owner/permissions for the entire partition when you mount it.
Now, the way to do this depends on the NTFS utilities you are actually using (which I don't know, so I'm assuming you are using ntfs-3g).
E.g. mounting the partition with the following parameters (replace /dev/sdX with your actual partition and /path/where/the/drive/is/mounted with your target path):
mount -o gid=www-data /dev/sdX /path/where/the/drive/is/mounted
should make all the files on the partition belong to the www-data group.
If the filesystem sets the group ownership explicitly, this still might not work.
In that case, you might need to set up a usermap that maps your Windows users/groups (as found on the partition) to your Linux users/groups.
The ntfs-3g.usermap utility will help you generate an initial usermap file, which you can then edit to your needs:
ntfs-3g.usermap /dev/sdX
Then pass the usermap to the mount options:
mount -o usermapping=/path/to/usermap.file /dev/sdX /path/where/the/drive/is/mounted
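To make this persist across reboots, the same options can go into /etc/fstab; a sketch, assuming ntfs-3g and the placeholder paths used above:
/dev/sdX  /path/where/the/drive/is/mounted  ntfs-3g  gid=www-data,usermapping=/path/to/usermap.file  0  0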
I suspect that you have AppArmor enforcing rules that prevent Apache from reading non-whitelisted directory paths. I suggest that you:
Edit the AppArmor config for Apache so it can access your custom path. You'll need to hunt around /etc/apparmor.d/. You may also find running AppArmor in non-enforcing (complain) mode helpful:
$ sudo aa-complain /etc/apparmor.d/*
Use mod_apparmor? See this
Or disable apparmor completely. See this
My order of preference would be 1,3,2. That should fix this for you :)
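If you go with option 3 (disabling AppArmor completely), on Ubuntu that is roughly the following; verify against your release, since the init scripts differ:
sudo service apparmor stop
sudo update-rc.d -f apparmor remove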
While using Ubuntu alongside Windows I faced the same issue, and it was resolved by remounting the drive with read and write access. The command below will help you do that:
sudo mount -o remount,rw /disk/location /disk/new_location
If it is still not working, then in Windows go to the power options and disable Fast Startup.
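Alternatively, Fast Startup can be switched off from an elevated command prompt by disabling hibernation entirely (note this is a system-wide change that also removes the hibernate option):
powercfg /h off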
When you shut down a computer with Fast Startup enabled, Windows locks down the Windows hard disk. You won’t be able to access it from other operating systems if you have your computer configured to dual-boot. Even worse, if you boot into another OS and then access or change anything on the hard disk (or partition) that the hibernating Windows installation uses, it can cause corruption. If you’re dual booting, it’s best not to use Fast Startup or Hibernation at all.
Original article: https://www.howtogeek.com/243901/the-pros-and-cons-of-windows-10s-fast-startup-mode/