What updates mtime after writing to memory mapped files? - linux

I'm using XFS on Linux and have a memory mapped file to which I write once per second. I notice that the file mtime (shown by watch ls --full-time) changes periodically but irregularly. The gap between mtimes seems to be between 2 and 20 seconds but it is not consistent. There is very little else running on the system--in particular there's only one program of mine writing the file, plus one reading.
The same program writes much more frequently to some other mmapped files, and their mtime changes exactly once per 30 seconds.
I am not using msync() (which would update mtime when called).
My questions:
What updates mtime?
Is the update interval configurable?
Why do some mtimes get updated exactly once per 30 seconds but some files which I write less frequently have fresher (irregular but always less than 30 seconds old) mtimes?

When you mmap a file, you're basically sharing memory directly between your process and the kernel's page cache — the same cache that holds file data that's been read from disk, or is waiting to be written to disk. A page in the page cache that's different from what's on disk (because it's been written to) is referred to as "dirty".
There is a kernel thread that scans for dirty pages and writes them back to disk, under the control of several parameters. One important one is dirty_expire_centisecs. If any of the pages for a file have been dirty for longer than dirty_expire_centisecs then all of the dirty pages for that file will get written out. The default value is 3000 centisecs (30 seconds).
Another set of variables is dirty_writeback_centisecs, dirty_background_ratio, and dirty_ratio. dirty_writeback_centisecs controls how often the kernel thread checks for dirty pages, and defaults to 500 (5 seconds). If the percentage of dirty pages (as a fraction of the memory available for caching) is less than dirty_background_ratio then nothing happens; if it's more than dirty_background_ratio, then the kernel will start writing some pages to disk. Finally, if the percentage of dirty pages exceeds dirty_ratio, then any processes attempting to write will block until the amount of dirty data decreases. This ensures that the amount of unwritten data can't increase without bound; eventually, processes producing data faster than the disk can write it will have to slow down to match the disk's pace.
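All of these tunables are exposed under /proc/sys/vm (and via sysctl), so you can check what your system is actually using. A minimal sketch in C++ that just prints the current values (reading is unprivileged; changing them requires root):

    // Sketch: print the writeback tunables discussed above.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        const char* knobs[] = {
            "/proc/sys/vm/dirty_expire_centisecs",     // age at which dirty pages are forced out
            "/proc/sys/vm/dirty_writeback_centisecs",  // how often the flusher thread wakes up
            "/proc/sys/vm/dirty_background_ratio",     // % dirty memory before background writeback
            "/proc/sys/vm/dirty_ratio",                // % dirty memory before writers block
        };
        for (const char* path : knobs) {
            std::ifstream f(path);
            std::string value;
            if (f >> value)
                std::cout << path << " = " << value << '\n';
        }
    }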
The question of how the mtime gets updated is related to the question of how the kernel knows that a page is dirty in the first place. In the case of mmap, the answer is that the kernel sets the pages of the mapping to read-only. That doesn't mean that you can't write them, but it means that the first time you do, it triggers an exception in the memory-management unit, which is handled by the kernel. The exception handler does (at least) four things:
Marks the page as dirty, so that it will get written back.
Updates the file mtime.
Marks the page as read-write, so that the write can succeed.
Jumps back to the instruction in your program that writes to the mmaped page, which succeeds this time.
So when you write data to a clean page, it causes an mtime update, but it also causes the page to become read-write, so that further writes don't cause an exception (or an mtime update; see note 1). However, when the dirty page gets flushed to disk, it becomes clean, and also becomes "read-only" again, so that any further writes to it will trigger another eventual disk write, and also another mtime update.
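You can watch this happen with a small experiment. The sketch below is illustrative only (it assumes a pre-existing file ./mapped.dat of at least one page, and the exact timing of mtime updates varies with kernel version and filesystem): it maps the file, writes to the same page twice, and prints the mtime after each write; typically only the first write to the clean page bumps it.

    // Sketch: only the first write to a clean page should update mtime.
    // Assumes ./mapped.dat already exists and is at least one page long.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cstdio>

    static void print_mtime(const char* path, const char* label) {
        struct stat st;
        if (stat(path, &st) == 0)
            std::printf("%-18s mtime = %ld.%09ld\n", label,
                        (long)st.st_mtim.tv_sec, (long)st.st_mtim.tv_nsec);
    }

    int main() {
        const char* path = "./mapped.dat";
        int fd = open(path, O_RDWR);
        if (fd < 0) return 1;
        char* p = static_cast<char*>(
            mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        if (p == MAP_FAILED) return 1;

        print_mtime(path, "before:");
        p[0] = 'x';          // first write: write-protect fault, page dirtied, mtime updated
        print_mtime(path, "after 1st write:");
        sleep(2);
        p[1] = 'y';          // page already dirty and writable: no fault, no mtime update
        print_mtime(path, "after 2nd write:");

        munmap(p, 4096);
        close(fd);
    }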
So now, with a few assumptions, we can start to piece together the puzzle.
First, dirty_background_ratio and dirty_ratio are probably not coming into play. If the pace of your writes was fast enough to trigger background flushes, then most likely you would see the "irregular" behavior on all files.
Second, the difference between the "irregular" files and the "30 second" files is the page access pattern. I surmise that the "irregular" files are being written to in some sort of append-mode or circular-buffer fashion, such that you start writing to a new page every few seconds. Every time you dirty a previously untouched page, it triggers an mtime update. But for the files displaying the 30-second pattern, you only write to one page (perhaps they are one page or less in length). In that case, the mtime is updated on first write, and then not again until the file is flushed to disk by exceeding dirty_expire_centisecs, which is 30 seconds.
Note 1: This behavior is, technically, wrong. It's unpredictable, but the standards allow for some degree of unpredictability. But they do require that the mtime be sometime at or after the last write to a file, and at or before an msync (if any). In the case where a page is written to multiple times in the interval before it's flushed to disk, this isn't what happens — the mtime gets the timestamp of the first write. This has been discussed, but a patch that would have fixed it wasn't accepted. Therefore, when using mmap, mtimes can be in error. dirty_expire_centisecs sort of limits that error, but only partially, since other disk traffic might cause the flush to have to wait, extending the window for a write to bypass mtime even further.

Related

What is the performance impact of the touch command or its equivalent system call?

In a custom-developed NodeJS web server (running on Linux) that can dynamically generate thumbnail images, I want to cache these thumbnails on the filesystem and keep track of when they are actually used. If they haven't been used for a certain period of time (say, one year), I'd consider them "orphans" and delete them.
To this end, I considered touching them each time they're requested by a client, so that I can use the modification time to check when they were last used.
I assume this would incur a significant performance hit on the web server in high-load situations, as it is an "unnecessary" filesystem write, while, apart from logging, most requests will only consist of reads.
Has anyone performed any benchmarks on how big an impact this might have and if it's worthwhile?
It's probably not a huge cost, but it's still worth avoiding an update every time you open a file. That's the reason the relatime / noatime mount options were invented: to prevent the traditional Unix access-time timestamp from being updated every time a file was opened.
Is your filesystem mounted with relatime? That only updates atime when the file is opened (even just for reading) if the existing atime is older than the mtime/ctime or more than a day old, so for read-only access it works out to roughly once per day at most. The other mount option that's common on Linux is noatime: never update atime.
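One way to check which options a filesystem is mounted with is to scan /proc/self/mounts; a sketch in C++, where "/data" is just a hypothetical stand-in for wherever the thumbnails live:

    #include <mntent.h>
    #include <cstdio>
    #include <cstring>

    // Sketch: print the mount options (relatime / noatime / ...) for one mount point.
    int main() {
        FILE* mounts = setmntent("/proc/self/mounts", "r");
        if (!mounts) return 1;
        while (mntent* m = getmntent(mounts)) {
            if (std::strcmp(m->mnt_dir, "/data") == 0)   // hypothetical mount point
                std::printf("%s on %s: %s\n", m->mnt_fsname, m->mnt_dir, m->mnt_opts);
        }
        endmntent(mounts);
        return 0;
    }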
If you can't let the kernel do this for you without needing extra system calls, you might be better off making an fstat system call after opening the file and only touching it to update the mod time if the mod time is older than a day or week. (You're concerned about intervals of a year, so a week is fine.) i.e. manually implement the relatime logic, but for mod time.
Frequently accessed files will not need updates (and you're still making a total of one system call for them, plus a date-compare). Rarely accessed files will need another system call and a metadata write. If most of the accesses in your access pattern are to a smallish set of files repeatedly, this should be excellent.
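A sketch of that idea, written here against the raw system calls (the function name is made up, error handling is trimmed, and the one-week threshold is just the example from above); in Node the equivalent would go through the fs module:

    // Sketch: "relatime, but for mtime" - bump the timestamp only if it is
    // older than a week, so hot files cost just one extra fstat() per request.
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>
    #include <ctime>

    // Opens the thumbnail and marks it as recently used (at most once a week).
    int open_and_mark_used(const char* path) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return -1;

        struct stat st;
        if (fstat(fd, &st) == 0) {
            const time_t week = 7 * 24 * 60 * 60;
            if (time(nullptr) - st.st_mtime > week)
                futimens(fd, nullptr);   // nullptr = set atime and mtime to "now"
        }
        return fd;
    }

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        int fd = open_and_mark_used(argv[1]);
        if (fd >= 0) close(fd);
    }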
Possible reasons for not being able to use atime could include:
The filesystem is mounted with noatime and it's not worth changing that
The files sometimes get read by something other than your web server / CGI setup. (e.g. a backup job that does more than compare size / timestamps)
Of course, the other option is to not update timestamps on use, and simply let a thumbnail be regenerated once a year after your weekly cron job deletes it. That might be ok depending on your workload.
If you manually touch some of the "hottest" thumbnails so you stagger their deletion, instead of having a big load spike this time next year, you could be ok. And/or have your deleter walk your filesystem very slowly, again so you don't have a big batch of frequently-needed thumbnails deleted at once.
You could come up with schemes like enabling mod-time updates in the week before the annual cleanup, so thumbnails that should stay hot in cache get their modtimes updated. But it's probably better to just fstat / check / update all the time, since that shouldn't be too much extra load.

File contents lost after power outage

I am using a C++ ofstream to write a log file on Linux. When I monitor the file contents with the tail -f command I can see the contents are correctly populated. But if a power outage happens and I check the file again after the power cycle, the last couple of lines of records are gone. With hexdump I can see those records turned into null characters ('\0') instead. I tried flush() and the std::endl manipulator, and they don't help.
Is it true that what tail showed me was not actually written to the disk, and it was just in a buffer? The inode table wasn't updated before the power outage? I can accept that, but I don't understand why the records turned into null characters if they weren't written to the file.
Btw, I tried Google's glog and got the same results (a bunch of null characters at the end). I also tried zlog, a C library, and found it only lost the last records but didn't replace them with null chars.
Well, when you have a power outage and then start the system again, the Linux kernel replays the filesystem journal to detect and correct any inconsistencies between what was in memory and what had reached disk when the system crashed. Normally this means redoing and committing every operation that had completed by the time of the crash, and undoing (erasing) any data that had not been committed when it happened.
Linux (and other Unix-like kernels, such as FreeBSD) has a facility called ordered data writes, which forces metadata (such as block pointers in inodes, or directory entries) to be updated only after the data they point to has actually been written to disk, so inconsistencies are kept to a minimum. I don't know the details of the Linux implementation, but on FreeBSD, for example, what you describe (a block of zeros in a file instead of the data you actually wrote) should be impossible (well, you can produce it on purpose, but not accidentally). The most likely explanation is that only the block information or the file size was updated, and not the data up to that point. This shouldn't happen, as it's a solved problem.
The other question is how much data you lost, and why what you saw on the screen didn't appear after the crash. You have probably heard of delayed writes: on busy systems the kernel saves on write operations by not writing data to disk immediately, but waiting a while so that updates can be coalesced in in-memory buffers before they go to disk. Disk writes are still forced after some delay, which is 5 seconds on Linux (from memory; it's been a long time since I checked that value, and I'm not sure whether it's 5 or 30 seconds), so you can lose your last five seconds at most.
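Whatever the explanation for the nulls, the practical point is that flush() (and std::endl) only empty the C++ library's buffer into the kernel; surviving a power cut also requires the kernel to push its buffers to stable storage, which is what fsync() asks for. A hedged sketch, not taken from the answer above (the file name is illustrative, and opening a second descriptor just to fsync is a workaround for std::ofstream not exposing its fd):

    #include <fcntl.h>
    #include <unistd.h>
    #include <fstream>
    #include <string>

    // Sketch: flush the stream (library buffer -> kernel), then fsync the file
    // (kernel page cache -> disk) so the record survives a sudden power loss.
    void log_durably(std::ofstream& log, const char* path, const std::string& line) {
        log << line << '\n';
        log.flush();                      // data is now in the kernel's page cache

        int fd = open(path, O_WRONLY);    // ofstream hides its fd, so open the file again
        if (fd >= 0) {
            fsync(fd);                    // ask the kernel to push it to stable storage
            close(fd);
        }
    }

    int main() {
        const char* path = "app.log";     // illustrative path
        std::ofstream log(path, std::ios::app);
        log_durably(log, path, "something important happened");
    }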

Can file size be used to detect a partial append?

I'm thinking about ways for my application to detect a partially-written record after a program or OS crash. Since records are only ever appended to a file (never overwritten), is a crash while writing guaranteed to yield a file size that is shorter than it should be? Is this guaranteed even if the file was opened in read-write mode instead of append mode, so long as writes are always at the end of the file? This would greatly simplify crash recovery, since comparing the last record's expected size and position with the actual file size would be enough to detect a partial write.
I understand that random-access writes can be reordered by the filesystem, but I'm having trouble finding information on whether this can happen when appending. I imagine an out-of-order append would require the filesystem to create a "hole" at the tail of the (sparse) file, write blocks beyond the hole, and then fill in the blocks in between, but I'm hoping that such an approach would be so inefficient that nobody would ever implement their filesystem that way.
I suppose another problem might be a filesystem updating the directory entry's file size field before appending the new blocks to the file, and the OS crashing in between. Does this ever happen in practice? (ext4, perhaps?) Is there a quick way to detect it? (And what happens when trying to read the unwritten blocks that should exist according to the file's size?)
Is there anything else, such as write reordering performed by a disk/flash drive, that would get in the way of using file size as a way to detect a partial append? I don't expect to be able to compensate for this sort of drive trickery in my application, but it would be good to know about.
If you want to be SURE that you're never going to lose records, you need a consistent journaling or transactional system for your files.
There is absolutely no guarantee that a write will have been fulfilled unless you either set O_DIRECT [which you probably do not want to do], or you use markers to indicate that "this has been fully committed", which are only written when the file is closed. You can either do that in the main file, or, for example, have a file that records, externally, the "last written record". If you open & close that file, it should be safe as long as it's the APP that is crashing; if the OS crashes [or is otherwise abruptly stopped, e.g. power cut, disk unplugged, etc.], all bets are off.
Write reordering and write caching is/can be done at all levels - the C library, the OS, the filesystem module and the hard disk/controller itself are all ABLE to reorder writes.
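A sketch of the commit-marker idea from the answer above (the [length][payload][checksum] framing and the FNV hash are made up for illustration): on recovery you scan forward and stop at the first record that doesn't verify, which is exactly how a torn append shows up.

    #include <cstddef>
    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <vector>

    // Sketch: append-only records framed as [length][payload][checksum].
    static uint32_t checksum(const char* data, std::size_t n) {
        uint32_t h = 2166136261u;                    // FNV-1a, purely illustrative
        for (std::size_t i = 0; i < n; ++i) { h ^= (unsigned char)data[i]; h *= 16777619u; }
        return h;
    }

    void append_record(std::ofstream& out, const std::string& payload) {
        uint32_t len = payload.size();
        uint32_t sum = checksum(payload.data(), payload.size());
        out.write(reinterpret_cast<const char*>(&len), sizeof len);
        out.write(payload.data(), len);
        out.write(reinterpret_cast<const char*>(&sum), sizeof sum);
        out.flush();                                 // durability still needs an fsync
    }

    // Scan from the start; the first record that runs past EOF or fails the
    // checksum is a torn append. Returns the offset where the valid data ends.
    std::streamoff scan_valid_prefix(std::ifstream& in) {
        std::streamoff good = 0;
        uint32_t len = 0, sum = 0;
        while (in.read(reinterpret_cast<char*>(&len), sizeof len)) {
            std::vector<char> buf(len);
            if (!in.read(buf.data(), len)) break;
            if (!in.read(reinterpret_cast<char*>(&sum), sizeof sum)) break;
            if (sum != checksum(buf.data(), len)) break;
            good = in.tellg();
        }
        return good;                                 // truncate the file here during recovery
    }

    int main() {
        { std::ofstream out("records.log", std::ios::binary | std::ios::app);
          append_record(out, "hello"); }
        std::ifstream in("records.log", std::ios::binary);
        return scan_valid_prefix(in) > 0 ? 0 : 1;
    }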

Possible to implement journaling with a single fsync per commit?

Let's say you're building a journaling/write-ahead-logging storage system. Can you simply implement this by (for each transaction) appending the data (with write(2)), appending a commit marker, and then fsync-ing?
The scenario to consider is if you do a large set of writes to this log then fsync it, and there's a failure during the fsync. Are the inode direct/indirect block pointers flushed only after all data blocks are flushed, or are there no guarantees that blocks are being flushed in order? If the latter, then during recovery, if you see a commit marker at the end of the file, you can't trust that the data between it and the previous commit marker is meaningful. Thus you have to rely on another mechanism (involving at least another fsync) to determine what extent of the log file is consistent (e.g., writing/fsyncing the data, then writing/fsyncing the commit marker).
If it makes a difference, mainly wondering about ext3/ext4 as the context.
Note that Linux's and Mac OS's fsync and fdatasync are incorrect by default. Windows is correct by default, but can emulate Linux's behaviour for benchmarking purposes.
Also, fdatasync issues multiple disk writes if you append to the end of a file, since it needs to update the file inode with the new length. If you want to have one write per commit, your best bet is to pre-allocate log space, store a CRC of the log entries in the commit marker, and issue a single fdatasync() at commit. That way, no matter how much the OS / hardware reorder behind your back, you can find a prefix of the log that actually hit disk.
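A sketch of that scheme (the file name, record layout, and 64 MB preallocation are all illustrative): the point is that the append never grows the file, so fdatasync() has no inode length update to write, and the CRC in the commit marker lets recovery find the valid prefix of the log.

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Sketch: pre-allocated log, commit marker carries a CRC of the entry,
    // one fdatasync() per commit. Error handling omitted throughout.
    static uint32_t crc32_simple(const void* data, std::size_t n) {
        // Bitwise CRC-32 (polynomial 0xEDB88320); slow but dependency-free.
        const unsigned char* p = static_cast<const unsigned char*>(data);
        uint32_t crc = 0xFFFFFFFFu;
        for (std::size_t i = 0; i < n; ++i) {
            crc ^= p[i];
            for (int b = 0; b < 8; ++b)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    int main() {
        int fd = open("journal.log", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;
        posix_fallocate(fd, 0, 64 * 1024 * 1024);   // reserve space: appends won't change the size

        std::string entry = "transaction payload";
        uint32_t len = entry.size();
        uint32_t crc = crc32_simple(entry.data(), entry.size());

        // One commit = [len][payload][crc marker], then a single fdatasync().
        // On recovery, a marker whose CRC doesn't match the payload marks the
        // end of the usable prefix of the log.
        write(fd, &len, sizeof len);
        write(fd, entry.data(), entry.size());
        write(fd, &crc, sizeof crc);
        fdatasync(fd);

        close(fd);
        return 0;
    }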
If you want to use the log for durable commits or write ahead, things get harder, since you need to make sure that fsync actually works. Under Linux, you'll want to disable the disk write cache with hdparm, or mount the partition with barrier set to true. [Edit: I stand corrected, barrier doesn't seem to give the correct semantics. SATA and SCSI introduce a number of primitives, such as write barriers and native command queuing, that make it possible for operating systems to export primitives that enable write-ahead logging. From what I can tell from manpages and online, Linux only exposes these to filesystem developers, not to userspace.]
Paradoxically, disabling the disk write cache sometimes leads to better performance, since you get more control over write scheduling in user space; if the disk queues up a bunch of synchronous write requests, you end up exposing strange latency spikes to the application. Disabling write cache prevents this from happening.
Finally, real systems use group commit, and do < 1 sync write per commit with concurrent workloads.
There's no guarantee on the order in which blocks are flushed to disk. These days even the drive itself can re-order blocks on their way to the platters.
If you want to enforce ordering, you need to at least fdatasync() between the writes that you want ordered. All a sync promises is that when it returns, everything written before the sync has hit storage.
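In code, "fdatasync() between the writes you want ordered" is exactly that (illustrative file name, error handling omitted):

    #include <fcntl.h>
    #include <unistd.h>

    // Sketch: make sure record A is on storage before record B is even issued.
    int main() {
        int fd = open("ordered.dat", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) return 1;

        const char a[] = "record A\n";
        const char b[] = "record B\n";

        write(fd, a, sizeof a - 1);
        fdatasync(fd);               // returns only once A has hit storage
        write(fd, b, sizeof b - 1);
        fdatasync(fd);

        close(fd);
        return 0;
    }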

How does the kernel handle new file creation?

I wish to understand the way the kernel works when a user/app tries to create a file in a directory.
The background - we have a Java application which consumes messages over JMS, processes them and then writes the XML to an outbound queue plus a local directory. Yesterday we observed unusual delays in writing to the directory. On 'ls | wc -l' we found >300,000 files in there. Did a quick strace on the process and found it full of mutex calls (more than 3/4 of the calls in the strace were mutex).
So I thought that new file creation is taking time because the system has to check certain things every time (e.g. file names, to make sure that a new file with a specific name can be created) amongst 300,000 files, and then create the file.
I cleared the directory and the application returned to normal service levels.
My questions
Was my analysis correct (it seems so, because the app started working fine after the clear-down)?
More important, what does the kernel do when you try to create a new file in a directory?
Can the abnormal number of mutex calls be attributed to the high number of files in the directory?
Many thanks
J
Please read about the Linux filesystem, inodes and directory entries.
http://en.wikipedia.org/wiki/Inode_pointer_structure
The file system is organized into fixed-sized blocks. If your directory is relatively small, it fits in the direct blocks and things are fast. If your directory is not too big, it fits in the direct blocks and some indirect blocks, and is still reasonably fast. If your directory becomes too big, it spills into double indirect blocks and becomes slow.
Actual sizes depend on file system and kernel configuration.
Rule of thumb is to keep the directory under 12 blocks, depending on your block size. Many systems use 8K blocks; a fast directory is under 98,304 bytes.
A file entry is something like 16*4 bytes in size (IIRC), so plan on no more than 1500 files per directory as a practical upper limit.
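One way to see how far a given directory has grown is to stat() the directory itself: its size is the space taken up by its entries, which is what the rule of thumb above is counting (note that on many filesystems a directory never shrinks after entries are removed, so this shows a high-water mark). A sketch using the 98,304-byte figure from the answer:

    #include <sys/stat.h>
    #include <cstdio>

    // Sketch: report how many bytes of directory entries a directory holds,
    // and compare against the ~12-block rule of thumb quoted above.
    int main(int argc, char** argv) {
        const char* dir = argc > 1 ? argv[1] : ".";
        struct stat st;
        if (stat(dir, &st) != 0) return 1;
        std::printf("%s: %lld bytes of directory entries\n",
                    dir, (long long)st.st_size);
        if (st.st_size > 98304)
            std::printf("warning: bigger than the ~12-block rule of thumb\n");
        return 0;
    }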
Directories with large numbers of entries are often slow - how slow depends on the underlying filesystem.
The common solution is to create a hierarchy of directories, so each dir only has a few hundred entries.
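A sketch of that layout: hash the file name and use two hex digits per level as intermediate directory names, so even hundreds of thousands of files leave each leaf directory with only a handful of entries. The two-level scheme, the out/ root and the use of C++ std::hash are all just illustrative; a Java application would do the same thing with its own hash function.

    #include <sys/stat.h>
    #include <cstdio>
    #include <functional>
    #include <string>

    // Sketch: shard files across subdirectories keyed by a hash of the name,
    // e.g. out/3f/a7/message-12345.xml instead of 300,000 files in one directory.
    std::string shard_path(const std::string& root, const std::string& name) {
        std::size_t h = std::hash<std::string>{}(name);
        char level1[3], level2[3];
        std::snprintf(level1, sizeof level1, "%02x", (unsigned)(h & 0xff));
        std::snprintf(level2, sizeof level2, "%02x", (unsigned)((h >> 8) & 0xff));

        std::string dir = root + "/" + level1;
        mkdir(dir.c_str(), 0755);                 // EEXIST is fine; error handling trimmed
        dir += std::string("/") + level2;
        mkdir(dir.c_str(), 0755);
        return dir + "/" + name;
    }

    int main() {
        mkdir("out", 0755);
        std::printf("would write to: %s\n",
                    shard_path("out", "message-12345.xml").c_str());
    }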
The mutex calls (which show up in strace as futex system calls) come from the application itself, probably something in the JVM or the Java libraries, doing its own locking.
Synchronisation internal to the kernel is not visible via strace, as strace only examines the system calls themselves.
A directory with lots of files should not become inefficient if you are using a filesystem which uses directory indexes; most now do (ext3 does optionally but it's normally enabled nowadays).
Non-indexed directories (like those used on the bad old filesystems - ext2, vfat etc) get really bad with lots of files, and you'll see the "open" system call taking a lot longer.
