Is there any way for an NFS client machine to know about changes made on the remote file system by another client? How can it know which new files have been created, deleted, or modified? The NFS share is mounted on a Linux machine.
I have inotify on my Linux server. I looked up a whole lot of postings online on how to use inotify and found a sample C program that watches a directory for file creates/deletes. It worked fine on both a local directory and an NFS directory (which is what I really need).
Now, looking at options for how to make this an always-running process, I see at least the following, as far as I understand:
1. Run this C program as a long-lived process that waits for events and never exits?
2. incrond, which is apparently a daemon. I don't seem to have it on my Linux server (RHEL 5), so I guess I need to install it. I'm not very clear on how incrond would work.
3. inotify-tools, which sounds the easiest since I can just use its commands in a shell script.
I also have questions like: what happens when the NFS mount is removed, or the server shuts down? Would inotify know to pick up from where it left off?
I know this is a lot of questions, but any pointers would help me a great deal. Thanks in advance. Meanwhile I will continue playing with the sample C code.
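For reference, a minimal sketch of such an always-running inotify watcher might look like the following; the watched path is a placeholder, and, as the answer below notes, it will only see changes made through the local kernel:

    #include <sys/inotify.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        // Create an inotify instance and watch one directory for create/delete/modify events.
        int fd = inotify_init();
        if (fd < 0) { std::perror("inotify_init"); return 1; }

        int wd = inotify_add_watch(fd, "/mnt/nfs/watched",          // placeholder path
                                   IN_CREATE | IN_DELETE | IN_MODIFY);
        if (wd < 0) { std::perror("inotify_add_watch"); return 1; }

        char buf[4096];
        for (;;) {                                  // "always running": read() blocks until events arrive
            ssize_t len = read(fd, buf, sizeof(buf));
            if (len <= 0)
                break;
            for (char *p = buf; p < buf + len; ) {
                struct inotify_event *ev = reinterpret_cast<struct inotify_event *>(p);
                if (ev->len > 0)
                    std::printf("event mask 0x%x on %s\n", ev->mask, ev->name);
                p += sizeof(struct inotify_event) + ev->len;
            }
        }
        close(fd);
        return 0;
    }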
I don't think that inotify(7) works reliably with network file systems (either NFS or CIFS).
It could work (on the local host) if the local host itself is modifying/writing the NFS-mounted filesystem.
It won't work (on the local host) if some remote client is modifying/writing that NFS-mounted filesystem (mounted by the local host).
That is because the NFS protocol (at least the versions I know of, pre-NFSv4) is an RPC protocol, and there is no way for the remote NFS server (exporting that filesystem) to signal to its clients that something is happening.
For some time now I've been trying to send files to an Embedded Linux device via FTP, without success. I even posted a question on SO about my problem previously, and I still haven't gotten any further in solving it.
One thing I noticed, though, is that most FTP examples on the web involve a server-client relationship: the client connects to a server that is constantly listening on some IP address and port, and the file transfer begins. When studying the examples that use QNetworkAccessManager to send a file (generally over HTTP), they never mention the requirements on the "other side", which leads me to believe I'm missing the necessary FTP server running on my Embedded Linux device.
So my question is more a confirmation of my suspicions: if I want to transfer a file from my desktop to my device using FTP, do I need an FTP server constantly running on that device? If yes, how should that change my code? For instance, should I abandon QNetworkAccessManager in favour of QTcpSocket? In other words, what else should I know to make the file transfer work using Qt? (In fact, should I even bother with FTP at all instead of just using a plain QTcpServer?)
FTP is a protocol with two parties, the client and the server. Both must comply with the FTP specification before a file transfer can take place.
So yes, there has to be an FTP daemon (the server) running on the other device.
It doesn't have to run constantly, just whenever you want to transfer files.
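Assuming the device does have an FTP daemon running, and assuming a Qt 5 build (QNetworkAccessManager's ftp support was removed in Qt 6), the client-side upload could stay roughly like this sketch; the address, credentials, and file name are made-up placeholders:

    #include <QCoreApplication>
    #include <QFile>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QUrl>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QFile file("firmware.bin");                         // placeholder local file to upload
        if (!file.open(QIODevice::ReadOnly))
            return 1;

        QUrl url("ftp://192.168.1.50/upload/firmware.bin"); // placeholder device address and path
        url.setUserName("user");                            // placeholder credentials
        url.setPassword("password");

        QNetworkAccessManager manager;
        QNetworkReply *reply = manager.put(QNetworkRequest(url), &file);

        // Quit the event loop once the upload finishes (or fails).
        QObject::connect(reply, &QNetworkReply::finished, [&]() {
            if (reply->error() != QNetworkReply::NoError)
                qWarning("Upload failed: %s", qPrintable(reply->errorString()));
            reply->deleteLater();
            app.quit();
        });

        return app.exec();
    }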
I am having issues with incrond's IN_CREATE option. I am able to successfully monitor directories created inside a specific local folder, but incrond is not able to monitor NFS-mounted directories.
I have a network mount (CIFS/NFS) at /mnt/DIR which is added to incrontab. Do you know why it is not able to monitor changes under /mnt/DIR?
Thanks,
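For reference, an incrontab entry of the kind that works for a local directory looks roughly like this; the watched path and the handler script are placeholders ($@ expands to the watched path and $# to the name of the file that triggered the event):

    /data/incoming IN_CREATE /usr/local/bin/handle-new-file.sh $@/$#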
It doesn't work, because there is no inotify support in NFS.
In other words, there is nothing in the NFS protocol that allows a client to register interest in a file or directory, nor does the protocol support the server pushing such notifications back to the client.
Our Symfony2 webapp uses the Assetic watcher in development mode to re-compile assets on the go.
The webapp runs in a Docker container which runs in a Vagrant VM (Ubuntu 12.04 Precise).
The host is OS X 10.9 Mavericks; it shares the code folder with the VM through an NFS (v3) share, and the code is mounted in the container via a host/guest volume in Docker.
Since inotify does not seem able to detect file modifications over NFSv3, the watcher falls back to polling mode, which can be very slow (~1/2 minutes to detect a modification).
I've read that NFSv4 is inotify-compliant, but I have not found any good resource on that.
Is there a way to make NFS and inotify work together?
Unfortunately, inotify cannot work on NFS. inotify works by hooking itself into the VFS (virtual filesystem) layer in the kernel. Whenever a modification happens, inotify knows about it, because the modification happens on the same machine, and therefore in the same kernel, which is what makes the whole thing possible.
With NFS, modifications happen on the server, and notifications are expected on the client. But NFS doesn't notify clients when a change is made; otherwise, it wouldn't scale. NFS was designed (and is operated) to serve thousands of clients from a single server. Imagine if you made a tiny change and the server had to push a notification to every client!
Of course, you could say "hey, there should be a subscription mechanism in the NFS protocol, so that clients can tell the server that they want to know about changes happening in a specific location". Well, NFS was designed 30 years ago, so forgive them for not including this subscription/notification system :-)
I'm not familiar with Assetic, but maybe you could have a custom script to watch for changes manually, and re-compile assets each time you detect a change. Just walk through the directory containing the source for the assets, keep track of the mtime of each file in an associative array, and each time you detect a new file (or a new mtime), recompile. Boom!
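A rough C++17 sketch of that mtime-polling idea, with the asset directory and the recompile command as placeholders to adapt to your project:

    #include <chrono>
    #include <cstdlib>
    #include <filesystem>
    #include <map>
    #include <string>
    #include <thread>

    namespace fs = std::filesystem;

    int main()
    {
        const fs::path assetDir = "app/Resources/assets";     // placeholder: directory with asset sources
        std::map<std::string, fs::file_time_type> mtimes;     // path -> last seen mtime

        for (;;) {
            bool changed = false;
            for (const auto &entry : fs::recursive_directory_iterator(assetDir)) {
                if (!entry.is_regular_file())
                    continue;
                const std::string path = entry.path().string();
                const auto mtime = entry.last_write_time();
                auto it = mtimes.find(path);
                if (it == mtimes.end() || it->second != mtime) {   // new file, or new mtime
                    mtimes[path] = mtime;
                    changed = true;
                }
            }
            if (changed)                                           // also fires on the first pass
                std::system("php app/console assetic:dump");      // placeholder recompile command
            std::this_thread::sleep_for(std::chrono::seconds(2));  // poll interval
        }
    }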
See also this other SO question about inotify and NFS.
Here is a plugin which aims to solve this: https://github.com/mhallin/vagrant-notify-forwarder
Just install it and reload your boxes to have inotify notifications forwarded to your guest machines:
vagrant plugin install vagrant-notify-forwarder
You might be interested in a tool called Guard: it listens for file changes on the host OS, and the guest then pulls in and updates those changes. This worked for me, and now my assets are updated almost instantaneously.
https://serverfault.com/questions/453826/vagrant-shared-folder-and-file-change-events
I used to use MAMP (or just a local Apache/PHP/MySQL stack) to work on web projects. I've since graduated to a live Ubuntu server which is much closer to the production environments for the sites I work on.
Now I'm trying to take this a step further to optimize my workflow. My goal is to have a Linux server running in VirtualBox that automounts a local folder share (from the host) and uses a symlink to gain access to the files (i.e. client:/var/www/dev is a symlink to host:/Users/charlie/dev/).
I don't want to keep my files stored on the virtual server if it can be avoided. I prefer having direct local access to the files rather than waiting out buffering issues between the host and the client. For example, if I have several files that live on the client open in my IDE and I close my laptop, there is a bit of a hiccup as soon as I open it again: my IDE has open projects that reference folders and files on a network share that isn't available yet. In the few seconds it takes the virtual machine to wake up, OS X has already reported that the share can't be found and was disconnected, the IDE chokes up, and so on.
So what am I asking? Well, is this safe / are there obvious pitfalls I'm not seeing / better ways to do this?
Edit: For anyone who stumbles upon this post, the final setup is a Linux virtual machine running in VirtualBox on a Mac, with NFS and a symlink from my Apache web root to my mount.
I used NFS Manager (http://www.bresink.com/osx/NFSManager.html) to set up the NFS server on my host computer with user mapping to my primary account. This ensures that when my VM mounts the NFS share it can do whatever it needs (reading, writing, modifying). Then I added this line to /etc/fstab on my VM to automount the share on boot: "123.456.89.1:/Users/charlie/nfs_share /mnt/nfs_share nfs" (where 123.456.89.1 is my host's IP on the virtual NAT).
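In case it helps, the moving parts boil down to roughly the following; the username and paths are examples, and the exact exports line NFS Manager writes depends on the OS X version:

    # /etc/exports on the OS X host (user mapping to the primary account)
    /Users/charlie/nfs_share -mapall=charlie

    # /etc/fstab on the Linux VM (automounts the share at boot)
    123.456.89.1:/Users/charlie/nfs_share  /mnt/nfs_share  nfs  defaults  0  0

    # Symlink so the Apache web root points at the mounted share
    sudo ln -s /mnt/nfs_share /var/www/dev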
The result is a killer development environment where I can use Finder, Aptana (or whatever your editor of choice is), Photoshop, etc., to work on files locally and simultaneously test them out in my "real" Apache/Lighttpd/MySQL/PHP environment!
I am using the exact same setup for accessing my documents folder between my Ubuntu host and a Windows guest, and likewise on my iMac. The only issue when editing across the two platforms is the CR/LF line endings, but that will not be a problem in your setup.