I was going through the book The Linux Programming Interface. On page 73 in Chapter 4,
it is written:
fd = open("w.log", O_WRONLY | O_CREAT | O_TRUNC | O_APPEND, S_IRUSR | S_IWUSR);
I read that the O_TRUNC flag is used to truncate the file length to zero, which destroys any existing data in the file.
The O_APPEND flag is used to append data to the end of the file.
The kernel records a file offset, sometimes also called the read-write offset or pointer. This is the location in the file at which the next read() or write() will commence.
I am confused: if the file is truncated and the kernel does subsequent writes at the end of the file, why is the append flag needed to explicitly say that writes should go to the end of the file?
Without the append flag (if the file is truncated), the kernel writes at the end of the file on subsequent write() calls.
The O_APPEND flag is used to append data to the end of the file.
That's true, but incomplete enough to be potentially misleading. And I suspect that you are in fact confused in that regard.
The kernel records a file offset, sometimes also called the read-write offset or pointer. This is the location in the file at which the next read() or write() will commence.
That's also incomplete. There is a file offset associated with at least each seekable file. That is the position where the next read() will commence. It is where the next write() will commence if the file is not open in append mode, but in append mode every write happens at the end of the file, as if it were repositioned with lseek(fd, 0, SEEK_END) before each one. In that case, then, the current file offset might not be the position where the next write() will commence.
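For illustration, a minimal sketch (file name hypothetical, error checks omitted) of that append-mode behaviour:
#include <fcntl.h>
#include <unistd.h>
int fd = open("demo.txt", O_WRONLY | O_APPEND);
lseek(fd, 0, SEEK_SET); /* move the file offset back to the start... */
write(fd, "x", 1);      /* ...but the byte is still written at the end of the file */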
I am confused: if the file is truncated and the kernel does subsequent writes at the end of the file, why is the append flag needed to explicitly say that writes should go to the end of the file?
It is not needed to cause the first write (by any process) after truncation to occur at the end of the file because immediately after the file has been truncated there isn't any other position.
Without the append flag (if the file is truncated), the kernel writes at the end of the file on subsequent write() calls.
It is not needed for subsequent writes either, as long as the file is not repositioned or externally modified. Otherwise, the location of the next write depends on whether the file is open in append mode or not.
In practice, it is not necessarily the case that every combination of flags is useful, but the combination of O_TRUNC and O_APPEND has an observably different effect than either flag alone, and the combination is useful in certain situations.
O_APPEND rarely makes sense with O_TRUNC. I think no combination of the C fopen modes will produce that combination (on POSIX systems, where this is relevant).
O_APPEND ensures that every write is done at the end of the file, automatically, regardless of the write position. In particular, this means that if multiple processes are writing to the file, they do not stomp over each other's writes.
Note that POSIX does not require the atomic behavior of O_APPEND. It requires that an automatic seek take place to the (current) end of the file before the write, but it doesn't require that position to still be the end of the file when the write occurs. Even on implementations which feature atomic O_APPEND, it might not work for all file systems. The Linux man page on open cautions that O_APPEND doesn't work atomically on NFS.
Now, if every process uses O_TRUNC when opening the file, it will be clobbering everything that every other process wrote. That conflicts with the idea that the processes shouldn't be clobbering each other's writes, for which O_APPEND was specified.
O_APPEND is not required for appending to a file by a single process which is understood to be the only writer. It is possible to just seek to the end and then start writing new data. Sometimes O_APPEND is used in the exclusive case anyway simply because it's a programming shortcut. We don't have to bother making an extra call to position to the end of the file. Compare:
FILE *f = fopen("file.txt", "a");
// check f and start writing
versus:
FILE *f = fopen("file.txt", "r+");
// check f
fseek(f, 0, SEEK_END); // go to the end, also check this for errors
// start writing
We can imagine a group of processes appending to a file with O_APPEND, where the first one to open it also passes O_TRUNC to truncate it first. But it seems awkward to program this: it's not easy for a process to tell whether it is the first one to open the file.
If such a situation is required on, say, boot-up, where the old file from before the boot is irrelevant for some reason, just have a boot-time action (script or whatever) remove the old file before these multiple processes are started. Each one then uses O_CREAT to create the file if necessary (in case it is the first process) but without O_TRUNC (in case they are not the first process), and with O_APPEND to do the atomic (if available) appending thing.
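Each of those processes would then open the log along these lines (name and permission bits illustrative):
fd = open("shared.log", O_WRONLY | O_CREAT | O_APPEND, S_IRUSR | S_IWUSR);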
The two are entirely independent. The file is simply opened with O_APPEND because it's a log file.
The author wants concurrent messages to concatenate instead of overwrite each other (e.g. if the program forks), and if an admin or log rotation tool truncates the file then new messages should start being written at line #1 instead of at line #1000000 where the last log entry was written. This would not happen without O_APPEND.
Related
I have a program which opens a static file with sys_open() and wants to receive a file descriptor equal to zero (= stdin). I have the ability to write to the file, remove it, or modify it, so I tried creating a symbolic link from the static file name to /dev/stdin. It opens stdin, but returns the lowest available fd (not equal to zero). How can I cause the syscall to return zero, without hooking the syscall or modifying the program itself? Is that even possible?
(It's part of a challenge, not a real case scenario)
Thank you as always
POSIX guarantees that open() returns the lowest available file descriptor. Therefore you can just invoke the program with stdin closed, so that descriptor 0 is free and the open() of the static file returns 0:
./myprogram 0>&-
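The same effect can be demonstrated from inside a program (a sketch; the file name is hypothetical): close descriptor 0, and the next open() hands it back.
#include <fcntl.h>
#include <unistd.h>
close(0);                               /* free up descriptor 0 */
int fd = open("static.file", O_RDONLY); /* lowest available fd, i.e. 0 */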
Let's say I have a big file, 1 GB. I want to READ 10 KB at offset 10, then WRITE 645 KB at offset 235689, then READ 150 MB at offset 648975, and so on...
What is the best approach between these two:
Opening the file and mmap-ing it (at what size?), then doing the reads/writes, and at the end unmapping and closing it.
Or opening the file, and on each read/write, mmap-ing the relevant range (at what size?) and then unmapping it, closing the file at the end.
Doing an mmap(2) on every I/O doesn't sound like the right thing: it would confuse the reader of the code and possibly defeat the kernel's optimizations, and it has no benefit.
You can use pread(2)/pwrite(2) or preadv(2)/pwritev(2) if you want to be explicit about your reads and writes.
If not, you can mmap(2) the entire file (but be sure to use the right flags, probably MAP_SHARED); Linux won't try to load the entire file into memory anyway.
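As a rough sketch of the explicit approach (file name hypothetical, error handling abbreviated), using the sizes from the question:
#include <fcntl.h>
#include <unistd.h>
int fd = open("big.file", O_RDWR);
char buf[10 * 1024];
ssize_t n = pread(fd, buf, sizeof buf, 10); /* read 10 KB at offset 10 */
if (n > 0)
    pwrite(fd, buf, (size_t)n, 235689);     /* write at offset 235689 */
close(fd);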
I'm a beginner in assembly (using nasm). I'm learning assembly through a college course.
I'm trying to understand the behavior of the sys_read Linux system call when it's invoked. Specifically, sys_read stops when it reads a newline (line feed). According to what I've been taught, this is true. This online tutorial article also affirms the claim.
When sys_read detects a linefeed, control returns to the program and the user's input is located at the memory address you passed in ECX.
I checked the Linux programmer's manual for the sys_read call (via "man 2 read"). It does not mention this behavior at all, though it should if the claim were true, right?
read() attempts to read up to count bytes from file descriptor fd
into the buffer starting at buf.
On files that support seeking, the read operation commences at the
file offset, and the file offset is incremented by the number of bytes
read. If the file offset is at or past the end of file, no bytes are
read, and read() returns zero.
If count is zero, read() may detect the errors described below. In
the absence of any errors, or if read() does not check for errors, a
read() with a count of 0 returns zero and has no other effects.
If count is greater than SSIZE_MAX, the result is unspecified.
So my question really is, why does the behavior happen? Is it a specification in the linux kernel that this should happen or is it a consequence of something else?
It's because you're reading from a POSIX tty in canonical mode (where backspace works before you press return to "submit" the line; that's all handled by the kernel's tty driver). Look up POSIX tty semantics / stty / ioctl. If you ran ./a.out < input.txt, you wouldn't see this behaviour.
Note that read() on a TTY will return without a newline if you hit control-d (the EOF tty control-sequence).
Assuming that read() reads whole lines is ok for a toy program, but don't start assuming that in anything that needs to be robust, even if you've checked that you're reading from a TTY. I forget what happens if the user pastes multiple lines of text into a terminal emulator. Quite probably they all end up in a single read() buffer.
See also my answer on a question about small read()s leaving unread data on the terminal: if you type more characters on one line than the read() buffer size, you'll need at least one more read system call to clear out the input.
As you noted, the read(2) libc function is just a thin wrapper around sys_read. The answer to this question really has nothing to do with assembly language, and is the same for systems programming in C (or any other language).
Further reading:
stty(1) man page: where you can change which control character does what.
The TTY demystified: some history, and some diagrams showing how xterm, the kernel, and the process reading from the tty all interact. And stuff about session management, and signals.
https://en.wikipedia.org/wiki/POSIX_terminal_interface#Canonical_mode_processing and related parts of that article.
This is not an attribute of the read() system call, but rather a property of termios, the terminal driver. In the default configuration, termios buffers incoming characters (i.e. what you type) until you press Enter, after which the entire line is sent to the program reading from the terminal. This is for convenience so you can edit the line before sending it off.
As Peter Cordes already said, this behaviour is not present when reading from other kinds of files (like regular files) and can be turned off by configuring termios.
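For example, a minimal sketch of turning off canonical mode (and echo) so that read() returns after every keypress:
#include <termios.h>
#include <unistd.h>
struct termios t;
tcgetattr(STDIN_FILENO, &t);          /* fetch the current terminal settings */
t.c_lflag &= ~(ICANON | ECHO);        /* disable line buffering and echo */
t.c_cc[VMIN] = 1;                     /* read() returns after 1 byte */
t.c_cc[VTIME] = 0;                    /* no inter-byte timeout */
tcsetattr(STDIN_FILENO, TCSANOW, &t); /* apply the new settings now */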
What the tutorial says is garbage, please disregard it.
Should write() implementations in a FUSE filesystem assume random access, or can they make simplifying assumptions, such as that writes will only ever be performed sequentially, at increasing offsets?
You'll get extra points for a link to the part of a POSIX or SUS specification that describes the VFS interface.
Random, for certain. There's a reason why the read and write interfaces take both size and offset. You'll notice that there isn't a seek field in the fuse_operations struct; when a user program calls seek/lseek on a FUSE file, the offset in the kernel file descriptor is updated, but the FUSE fs isn't notified at all. Later reads and writes just start coming to you with a different offset, and you should be able to handle that. If something about your implementation makes it impossible, you should probably return -EIO on the writes you can't satisfy.
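As a sketch, a high-level-API write handler that honours whatever offset arrives, assuming the open handler stored a real file descriptor in fi->fh (the names here are hypothetical):
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <unistd.h>
static int myfs_write(const char *path, const char *buf, size_t size,
                      off_t offset, struct fuse_file_info *fi)
{
    /* no sequentiality assumed: write exactly where the kernel says */
    ssize_t n = pwrite((int)fi->fh, buf, size, offset);
    return n < 0 ? -errno : (int)n;
}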
Unless there is something unusual about your FUSE filesystem that would prevent an existing file from being opened for write, your implementation of the write operation must support writes to any offset — an application can write to any location in a file by lseek()-ing around in the file while it's open, e.g.
fd = open("file", O_WRONLY);
lseek(fd, SEEK_SET, 100);
write(fd, ...);
lseek(fd, SEEK_SET, 0);
write(fd, ...);
I'm working on a Perl-based file synchronization tool. It downloads files into a temporary directory (which is guaranteed to be on the same filesystem as the real file) and then moves the temporary files into place over the old ones, preserving metadata like permissions, ownership, and ACLs. I'm wondering how to achieve that last step on Linux.
On Mac OS X, at least in C, I would use the exchangedata function. This takes two filenames as arguments and swaps their contents, leaving all metadata (besides mtime) intact. It guarantees that the operation is atomic—all readers will see either the old file or the new one, never something in between. Unfortunately, I don't think it's available on Linux.
I know that rename moves atomically, but it doesn't preserve metadata. On the other hand, I could open the file and overwrite the data with the contents of the new one, which would preserve all metadata but would not be an atomic operation. Any suggestions on tackling this problem?
The only approach I see here is to read the metadata from the file you are replacing, apply that to the temporary file, and then rename the temporary file over the old file. (rename preserves the source file attributes, obviously.)
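A rough sketch of that sequence (paths hypothetical; ACLs and extended attributes would need extra handling, e.g. via libacl, which is not shown):
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>
struct stat st;
if (stat("real.file", &st) == 0) {
    chown("tmp.file", st.st_uid, st.st_gid); /* copy owner/group; may need privilege */
    chmod("tmp.file", st.st_mode & 07777);   /* copy permission bits */
}
rename("tmp.file", "real.file");             /* atomic replacement */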
Filesystem-specific, but...
The XFS_IOC_SWAPEXT ioctl swaps the extents of two file descriptors on XFS.
#include <xfs/xfs.h>
#include <xfs/xfs_dfrag.h>
xfs_swapext_t sx = {
    ...,
    .sx_fdtarget = fd1,
    .sx_fdtmp = fd2,
    ...
};
xfs_swapext(fd1, &sx);
See the sources to xfs_fsr for example usage.