"Just in time" read only filesystem using mkfifo and inotifywait - linux

I am writing some gross middleware - basically, I have some old code that needs to open 100,000 files for reading only, expecting them all to be in one folder. It never writes. It is multiprocess, so it can try to open ~30 files at the same time. The old way, I would have to actually copy the files into that folder (or use links, NFS, etc.). Worth noting: I have no ability to change this old code - it's just a binary.
I have some new, fancy code that can retrieve a file almost instantly. I want to tie these things together, so when the old code tries to open the file, it is actually, in real time, running the new code.
So I thought of mkfifo and inotifywait. Instead of a folder of 100,000 files, I can make a folder of 100,000 named pipes. So far so good. The legacy code goes to open the files, not knowing that they are in fact named pipes. The problem is, I don't know what order the legacy code is going to open the files in (nice, right?). So I would like to TRIGGER the named pipe WRITE (from my fancy new code) when the legacy code goes in for the read. I can't spawn 100,000 writers and have them all block. So I thought hey - inotifywait makes sense. Every time the legacy code goes to open a pipe, it triggers a read event, which can then be used to spawn the pipe writer in the background. The problem is... inotifywait doesn't trigger the read event until AFTER the writer has been spawned!
Any ideas of how to solve this? Basically, I want to intercept a file open, block for a couple hundred ms while I retrieve the contents of the file, then return those contents. Ideally I don't have to create a custom FUSE filesystem to do this... it's just a read-only file open. The problem is that this needs to run fast and in parallel, and I don't know which files are going to be opened in what order. Gotta be a quick and dirty way!
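For reference, here's roughly the setup I've been trying - just a sketch, where fetch_file and the paths stand in for my new retrieval code:

# One named pipe per expected file (sketch; paths and names are made up)
mkdir -p /data/shim
while read -r name; do
    mkfifo "/data/shim/$name"
done < expected_files.txt

# Watch for the legacy code opening a pipe, then spawn a writer for it.
# This is where it falls apart: the open event doesn't show up until a
# writer has already attached to the pipe.
inotifywait -m -e open --format '%f' /data/shim |
while read -r name; do
    fetch_file "$name" > "/data/shim/$name" &   # fetch_file = my new code
done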
Thanks in advance for everyone's time.

Related

Running a script twice at the same time

I'm making a simple little script to improve the efficiency of my work team.
The script simply searches a file that the user gives as param.
./check_file test_file.xml
I used only the ls and cp commands, and there are no log or temporary files.
My question is: should I add a .lock file to make sure the script runs only once at a time, or can I skip this check?
Usually I create a lock file, because my scripts write temporary files, and if two users run the script at the same moment, it blows up.
Thanks!
Generally speaking, no. I would recommend avoiding temporary files as much as possible, preferring pipes instead. However, I doubt it's always possible to avoid temporary files, so when I have to use them, I put $$ (the current process ID, or PID) in the filename. So if you're using /tmp/check_file.tmp as your temporary filename, use /tmp/check_file.$$.tmp instead - then two processes can run at the same time, each with its own PID, and their files won't overlap.
Slightly more advanced is to also use ${TMP:-/tmp} as the temporary directory instead of just /tmp - that way users can specify a different directory for each run, and thereby also avoid any overlaps.
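For example, a minimal sketch combining both ideas (the ls step is just a placeholder for whatever the script actually does):

#!/bin/sh
# $$ expands to this process's PID, so two concurrent runs always get
# different temporary filenames and never clobber each other.
tmpfile="${TMP:-/tmp}/check_file.$$.tmp"
trap 'rm -f "$tmpfile"' EXIT   # remove the temporary file on exit

ls -l "$1" > "$tmpfile"        # placeholder for the script's real work
cat "$tmpfile"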

How do I monitor changes of files and only look at them when the changes are finished?

I'm currently monitoring files in node.js using fs.watch. The problem I have is, for example, let's say I copy a 1 GB file into a folder I'm watching. The moment the file starts copying I get a notification about the file. If I start reading it immediately I end up with bad data, since the file has not finished copying. For example, a zip file has its table of contents at the end, but I'd end up reading it before that table of contents has been written.
Off the top of my head, I could set up some task to call fs.stat on the file every N seconds and only try to read it when the stats stop changing. That would work, but it seems less than ideal: I'd like my app to be as responsive as possible, and calling stat on a bunch of files every second seems heavy, while calling stat only every 5 or 10 seconds seems unresponsive.
Is there some more robust way to get notified when a file has finished being modified?
So I did a project last year which required doing "file watching". There is a better library out there than fs.watch. Check out npm chokidar.
https://www.npmjs.com/package/chokidar
Under the hood it uses fs.watch, but wraps a number of improvements around it.
There is a property called awaitWriteFinish. It essentially polls the file to determine whether or not it has finished being written. I used it and it really works great.
Setting this property will allow you to work against that file, always ensuring that the file has been completely written. And you don't need to go off and implement your own method of determining if the file is complete. Should save a bunch of time.
Aside from that, I don't believe you can really get away from polling with regard to determining if a file is finished writing. Chokidar is still polling, it's just that you don't need to write the logic to do it. And you can configure the polling interval if CPU utilization is deemed to be too high.
Edit: I would also like to add: just give it a shot and see how it works. I get that you want it as responsive as possible... but having something working is better than having something not working at all. It might be that even with a polling solution it's not an issue for you. If it turns out to be a performance problem, then address it at that time and seek a "better" solution.

"find" command cannot detect files added during execution

Stack Overflow has saved my life on countless occasions over the years. Now it's time for me to post my first question ever, the answer to which I have been unable to find so far.
I have a tool (language/implementation is irrelevant) which accepts a text file as input. This text file (let's call it file_list.txt) contains a long list of file paths, one per line. The tool then iterates over the lines in file_list.txt and does something with every file path. This needs to be done continuously and file_list.txt needs to always contain the latest file paths because users continuously upload or delete files from the share being monitored. To achieve this, I have set up a cron job which calls a script. First the script calls the find utility with the search parameters required and pipes the output to a temporary file. When the file is fully populated, it is moved to file_list.txt. Then, once this is done, the tool is invoked with file_list.txt as an input parameter.
So far, so good. The share being monitored is VERY LARGE (~60 TB) and the find command takes around 5 hours to execute. This is not a problem since we have multiple overlapping find commands running in parallel (triggered once per hour). The entire setup runs on a compute farm, so CPU utilization, etc. is also not an issue.
The problem arises in the lag time for file detection. Ideally, I want a user to add a file and I want one of the already running, overlapping find commands to detect this file within a matter of minutes. However, I have noticed that none of the already-running find commands will detect this file. Only a find command started AFTER this file was added will detect it. This means that generally, I need to wait around 5 hours for a newly added file to be detected. This leads me to believe that the find utility somehow acts on a "cached" version of the share state when it was triggered. Is this true? Can anyone confirm this? And if so, what can I do to improve the detection lag?
Please let me know if further clarification is required. I am happy to provide any additional details.
To summarize: you have a gigantic filesystem volume (60 TB) which contains a huge number of files, and you use find(1) to name a large number of those files and put those names into a text file for analysis. You have discovered that files are not listed if they are created after find(1) was started but before it finished.
I think the best solution is to stop thinking of this as a batch job, and do it "online" using inotify(7). You can use the inotify API to be immediately informed of changes to your filesystem, including new files being created. There is of course the original C API, as well as the excellent pyinotify.
With inotify, you can start a watcher program once and leave it running continuously (under a supervisor if needed for restarts). The operating system can then notify you whenever a relevant filesystem event occurs, and you can respond immediately rather than waiting for the next scan.
The one downside for your use case might be that the watcher program does need to run on a machine which has the filesystem mounted locally. But the overall compute resources required are probably much less than your current approach of repeated linear scans.
Executing find commands and piping the output to temporary files might work up to a certain scale, but it is far from optimal. If you want a less resource-intensive, more reactive solution, I would recommend reimplementing your software using the inotify interface:
The inotify API provides a mechanism for monitoring filesystem events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.
So an event will be raised for each file change or each file being added.
Note that you can then keep an internal list of files up to date, which only needs to be changed when you get an event.
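As a rough sketch, using the inotifywait command from inotify-tools rather than the raw C API (the mount point is a placeholder, and note that setting up recursive watches on a 60 TB tree takes time and may require raising the inotify watch limits):

# Seed the list once with find, then keep it current from filesystem events.
find /mnt/share -type f > file_list.txt

inotifywait -m -r -e create -e moved_to -e delete -e moved_from \
    --format '%e %w%f' /mnt/share |
while read -r event path; do
    case "$event" in
        CREATE*|MOVED_TO*)   echo "$path" >> file_list.txt ;;
        DELETE*|MOVED_FROM*) grep -vxF "$path" file_list.txt > file_list.tmp &&
                             mv file_list.tmp file_list.txt ;;
    esac
done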

fs.createWriteStream over several processes

How can I implement a system where multiple Node.js processes write to the same file with fs.createWriteStream, such that they don't overwrite data? It looks like the default setup for fs.createWriteStream is that the file is cleared out when that method is called. My goal is to clear out the file once, and then have all other subsequent writers only append data.
Should I use fs.createWriteStream and then fs.appendFile? Or is there a way to open up a stream for each process, not just for the first process to open the file?
Should I use fs.createWriteStream and then fs.appendFile?
You can use either.
With fs.createWriteStream you have to change the flag, like this:
const fs = require('fs');

const stream = fs.createWriteStream('your_file', {
  flags: 'a+', // default is 'w' (truncate); just 'a' should be enough for appending
});
This should create the file if it doesn't exist, or open it with write access if it does, and position the write pointer at the end (append mode).
How to use fs.appendFile should be clear; it does pretty much the same thing.
Now for the problem of multiple processes accessing the same file. Nothing stops several processes from opening the same file for writing at once, but without coordination their writes can interleave and corrupt the output, so in practice you want only one process writing at a time.
Therefore you need to wait for the file to be released if another process currently holds it for writing. You will probably need a library for that.
this one for example: https://www.npmjs.com/package/lockup
or this one: https://github.com/Perennials/mutex-node
You can also find a lot more here: https://www.npmjs.com/browse/keyword/lock
or here: https://www.npmjs.com/browse/keyword/mutex
I have not tried any of those libraries but the one I posted and several others on the list should do exactly what you need.
Writing to a single file from multiple processes while ensuring data integrity is a fairly complex operation that you can orchestrate using file locking.
However, you have two simpler approaches:
Writing to a temporary file from each process, and then concatenating the files at the end of the operations.
Transmitting what you need to write to a single dedicated process and delegating the writing to it. Keep in mind that sending messages between processes can be expensive.

How to detect that no one is writing to a file in Linux?

I am wondering, is there a simple way to tell whether another entity has a certain file open for writing? I don't have time to use inotify continuously to wait for any current writer to finish writing. I need to do an intermittent check.
Thanks.
What exactly are you doing where you "don't have time to use inotify continuously"? First, you should be using the IN_CLOSE_WRITE flag so that inotify makes just one notification when the file gets closed after being written. Using it continuously makes no sense. Second, if your timing is that critical, I'm thinking writing to a file isn't your ideal solution. Do you control the first writer? Do you have to worry about anything else writing to the file after the first writer closes it?
lsof LiSts Open Files. fuser (File USER) works similarly, telling you which processes are using the file.
See: http://www.refining-linux.org/archives/23/16-Introduction-to-lsof-and-fuser/
Since you seem to want a library-style interface rather than shelling out to system tools, see ofl-lib.c. (It's essentially the ofl program with everything except the main function removed.)
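As a sketch of an intermittent check from a script (the path is a placeholder):

# fuser exits with status 0 if any process currently has the file open.
if fuser -s /path/to/data.file; then
    echo "still open by some process"
fi

# lsof -t prints just the PIDs that have the file open; the full lsof
# output shows the access mode in the FD column ('w' or 'u' means write).
lsof -t /path/to/data.file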
You can't do so easily in the general case, and even if you could, you cannot use the information in a non-racy manner (see caf's comment).
So I'd say, redesign your application so you do not need to know.
