Best POSIX way to determine if a filesystem is mounted read-only - Linux

If I have a POSIX system like Linux or Mac OS X, what's the best and most portable way to determine if a path is on a read-only filesystem? I can think of 4 ways off the top of my head:
open(2) a file with O_WRONLY - You would need to come up with a unique filename and also pass in O_CREAT and O_EXCL. If it fails and you have an errno of EROFS then you know it's a read-only filesystem. This would have the annoying side effect of actually creating a file you didn't care about, but you could unlink(2) it immediately after creating it.
statvfs(3) - One of the fields of the returned struct statvfs is f_flag, and one of its flags is ST_RDONLY for a read-only filesystem (a sketch of this check appears after these options). However, the spec for statvfs(3) makes it clear that applications cannot depend on any of the fields containing valid information, so there's a real possibility that ST_RDONLY might not be set even on a read-only filesystem.
access(2) - If you know the mount point, you can use access(2) with the W_OK flag as long as you are running as a user who would have write access to the mount point. I.e., either you are root or it was mounted with your UID as a mount parameter. You would get a return value of -1 and an errno of EROFS.
Parsing /etc/mtab or /proc/mounts - Doesn't seem portable. Mac OS X seems to have neither of these, for example. Even if the system did have /etc/mtab I'm not sure the fields are consistent between OSes or if the mount options for read-only (ro on Linux) are portable.
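For illustration, here is a minimal sketch of the statvfs(3) check from option 2; the function name is mine, and the caveat about f_flag still applies:

#include <sys/statvfs.h>

/* Returns 1 if the filesystem containing path reports ST_RDONLY,
 * 0 if not, -1 on error. Treat a clear result as a hint, not proof,
 * since POSIX doesn't guarantee f_flag is meaningful everywhere. */
int fs_looks_readonly(const char *path)
{
    struct statvfs vfs;
    if (statvfs(path, &vfs) != 0)
        return -1;
    return (vfs.f_flag & ST_RDONLY) ? 1 : 0;
}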
Are there other ways I'm missing? If you needed to know if a filesystem was mounted read-only, how would you do it?

You could also popen(3) the mount command and examine its output, looking for your file system and checking whether its line contains the text " (ro,".
But again, that's not necessarily portable.
My option would be not to worry about whether the file system is mounted read-only at all. Just try to create your file and, if it fails, tell the user what the error was. And, of course, give them the option of saving it somewhere else.
You really have to do that sort of thing anyway: in any scenario where there's even a small gap between testing and doing, you may find the situation has changed (probably not to the extent of an entire file system becoming read-only but, who knows, maybe there is, or will be in the future, a file system that allows this).
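As a rough sketch of that try-and-report approach (the exclusive-create flags echo option 1 from the question; the function name and handling are illustrative):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Try to create the file; on failure report the real errno
 * (EROFS, EACCES, ENOSPC, ...) and let the caller offer another location. */
int try_create(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd == -1) {
        fprintf(stderr, "cannot create %s: %s\n", path, strerror(errno));
        return -1;
    }
    close(fd);
    return 0;   /* keep the file, or unlink(path) if this was only a probe */
}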

utime(path, NULL);
If the filesystem is read-only, that call fails with errno set to EROFS; otherwise, if you have write permission, it simply updates the mtime on the directory, which is basically harmless.
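A minimal sketch of that probe, assuming you have write permission on the directory; the function name is mine:

#include <errno.h>
#include <utime.h>

/* Returns 1 for a read-only filesystem, 0 if the mtime update succeeded,
 * -1 if the call failed for some other reason (EACCES, ENOENT, ...). */
int dir_on_readonly_fs(const char *dir)
{
    if (utime(dir, NULL) == 0)
        return 0;
    return (errno == EROFS) ? 1 : -1;
}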

Related

Linux: How to prevent a file backed memory mapping from causing access errors (SIGBUS etc.)?

I want to write a wrapper for memory-mapped file I/O that either fails to map a file or returns a mapping that is valid until it is unmapped. With plain mmap, problems arise if the underlying file is truncated or deleted while it is mapped, for example. According to the Linux man page for mmap, SIGBUS is received if memory beyond the new end of the file is accessed after a truncation. Catching this signal and handling the error that way is not an option for me.
My idea was to create a copy of the file and map the copy. On a CoW-capable file system, this would impose little overhead.
But the problem is: how do I protect the copy from being manipulated by another process? A temp file is no real option, because in theory a malicious process could still mutate it. I know that there are file locks on Linux, but as far as I understand they're either optional or don't prevent others from deleting the file.
I'm asking for two kinds of answers: Either a way to mmap a file in a rock solid way or a mechanism to protect a tempfile fully from other processes. But maybe my whole way of approaching the problem is wrong, so feel free to suggest radical solutions ;)
You can't prevent a skilled and determined user from intentionally shooting themselves in the foot. Just take reasonable precautions so it doesn't happen accidentally.
Most programs assume the input file won't change, and that's usually fine.
Programs that want to process files shared with cooperative programs use file locking.
Programs that want a private file will create a temp file, snapshot or otherwise -- and if they unlink it for auto-cleanup, it's also inaccessible via the filesystem (see the sketch after this answer).
Programs that want to protect their data from all regular user actions will run as a dedicated system account, in which case chmod is protection enough.
Anyone with access to the same account (or root) can interfere with the program with a simple kill -BUS, chmod/truncate, or any of the fancier foot-guns like copying and patching the binary, cloning its FDs, or attaching a debugger. If that's what they want to do, it's not your place to stop them.
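To tie this back to the unlinked-temp-file point above, here is a sketch of snapshotting the source into an anonymous temp file and mapping the copy; O_TMPFILE is Linux-specific (mkstemp plus unlink is the portable fallback), and the paths and error handling are illustrative:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Copy path into an anonymous (already-unlinked) temp file and map that.
 * No other process can reach the copy through the filesystem, and
 * truncation of the original no longer matters to the mapping. */
void *map_private_copy(const char *path, size_t *len_out)
{
    int src = open(path, O_RDONLY);
    if (src == -1)
        return MAP_FAILED;

    struct stat st;
    if (fstat(src, &st) == -1) { close(src); return MAP_FAILED; }

    int copy = open("/tmp", O_TMPFILE | O_RDWR, 0600);  /* Linux-specific */
    if (copy == -1) { close(src); return MAP_FAILED; }

    off_t off = 0;
    if (sendfile(copy, src, &off, st.st_size) != st.st_size) {  /* may need a loop for huge files */
        close(src); close(copy); return MAP_FAILED;
    }
    close(src);

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, copy, 0);
    close(copy);            /* the mapping keeps the inode alive */
    *len_out = (size_t)st.st_size;
    return p;
}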

Does Linux need a writeable file system

Does Linux need a writeable file system to function correctly? I'm just running a very simple init program. Presently I'm not mounting any partitions. The kernel has mounted the root partition read-only. Is Linux designed to be able to run with just a read-only file system as long as I stick to mallocs, readlines and text to standard out (puts), or does Linux require a writeable file system in order even to perform standard text input and output?
I ask because I seem to be getting kernel panics and complaints about the stack. I'm not trying to run a useful system at the moment; I already have a useful system on another partition. I'm trying to keep it as simple as possible so that I can fully understand things before adding in an extra layer of complexity.
I'm running a fairly standard x86-64 desktop.
No, a writable file system is not required. It is theoretically possible to run GNU/Linux with only a read-only file system.
In practice you probably want to mount /proc, /sys, /dev, and possibly /dev/pts for everything to work properly. Note that even some bash commands require a writable /tmp, and some other programs a writable /var.
You can always mount /tmp and /var as ramdisks.
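For example, an early init program running as root could do something along these lines (sizes are illustrative):

#include <stdio.h>
#include <sys/mount.h>

/* Mount tmpfs instances on /tmp and /var so programs that expect
 * writable scratch space keep working on a read-only root. */
int mount_scratch_dirs(void)
{
    if (mount("tmpfs", "/tmp", "tmpfs", 0, "size=64m,mode=1777") != 0) {
        perror("mount /tmp");
        return -1;
    }
    if (mount("tmpfs", "/var", "tmpfs", 0, "size=64m") != 0) {
        perror("mount /var");
        return -1;
    }
    return 0;
}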
Yes and no. No, it doesn't need to be writeable if the system does almost nothing useful.
Yes, you're running a desktop, so it needs to be writeable.
Many processes actually need a writeable filesystem, because many system calls can create files; e.g. Unix domain sockets can create files.
Also, many applications write into /var and /tmp.
The way to get around this is to mount the filesystem read-only and use a filesystem overlay to layer an in-memory filesystem on top. That way, the paths appear writable, but writes go to RAM and any changes are thrown away on reboot.
See: overlayroot
No, it's not required. For example, most distributions have a live version of Linux for booting from a CD or USB disk without actually using any back-end HDD.
Also, on normal installations, the root partition is remounted read-only when corruption is detected on the disk. This way the system still comes up, with the root as a read-only partition.
You need to capture the vmcore and the stack trace of the panic from the dmesg output to analyse further.

Linux deleted file recovery

Is there a way to create a file in Linux that links to a specific inode?
Take this scenario: there is a file that is in the course of being written (a log, maybe) and that file is deleted, but a link in the /proc directory is still pointing at it. In this case we don't want a bare copy of it but a hard link to it, so that we keep receiving future modifications, right up to the last one made before the process closes it and the system deletes it.
If we have the inode number, is there a way to achieve this goal?
Since there is no syscall that takes an inode (the inode is a concept of the extX filesystems, and it is not good practice to build a stovepipe; better to build a chain of responsibility, as M.E.L. suggests), the only answer to this question is NO, because at the VFS level we handle file paths and names, not other internal representations.
BUT, to achieve the goal of tracking the very latest modifications, we can continuously monitor and duplicate the file with tail:
tail -c+1 -f --pid=PID /proc/PID/fd/FD > /path/to/the/copy
where PID is the pid of the process that still has the deleted file open and FD is its file descriptor number. With -f, tail opens and holds the file to display further modifications; with -c+1 it starts "tailing" from the first byte; and with --pid=PID, tail is told to exit when that PID exits.
You can use lsof to recover deleted files (sometimes)...
> lsof | grep testing.txt
less 4607 juliet 4r REG 254,4 21 8880214 /home/juliet/testing.txt (deleted)
Be sure to read the original article for full details before attempting this, unless you're a maverick like me.
> ls -l /proc/4607/fd/4
lr-x------ 1 juliet juliet 64 Apr 7 03:19 /proc/4607/fd/4 -> /home/juliet/testing.txt (deleted)
> cp /proc/4607/fd/4 testing.txt.bk
http://www.linuxplanet.com/linuxplanet/tips/6767/1
Enjoy
It's always difficult to answer a question like "can I do ..." confidently in the negative. But as far as I can see, neither /sys nor /proc provides a mapping of open file descriptors that are not symlinks. I assume by "but a link in the dir /proc is still pointing at it" you mean that the /proc/*/fd/ entries look like symlinks? I'm almost sure you cannot recover the original file.
I take that back: As user user2676075 pointed out, copying does work. Just hardlinking doesn't ...
UPDATE: If you think about it, it's quite logical.
/proc and /sys are file systems different from your hard disk, so they can't provide file-like directory entries which one could hard-link to a destination on the hard disk.
The /proc/*/fd/ entries pretend to be symlinks, but actually they are different, else the copying would not work. I think they pretend to be symlinks to provide meaningful information with 'ls -l'.
Regarding the (missing) capability to hardlink to some inode (let's say with some system call): This cannot be part of the kernel or the VFS-Interface, for the following reasons:
It would violate the integrity of the file system. The filesystem is not supposed to keep the disk blocks of files that are completely deleted around in the same manner as files that persist.
The inodes might be a completely virtual concept that identifies a "slot where a data stream is stored". I assume there can be implementations that would have a problem converting a slot that has no reference back into a slot which is referred to by a name in the file system.
I admit the case against the possibility of such a system call is not watertight. But given the current state of the VFS interface (which AFAIR doesn't provide for such a call), it would be a heavy burden for any file system implementation (including e.g. distributed file systems) to provide a call to link a file into a directory by inode.
ATM I wonder if calling fstat before and after deleting the last reference is actually required to return the same inode information ...
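That is easy to try; a small sketch of the experiment (the temp file path is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/inode-test", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd == -1) { perror("open"); return 1; }

    struct stat before, after;
    fstat(fd, &before);
    unlink("/tmp/inode-test");          /* drop the last reference by name */
    fstat(fd, &after);

    printf("inode before %llu, after %llu, link count after %llu\n",
           (unsigned long long)before.st_ino,
           (unsigned long long)after.st_ino,
           (unsigned long long)after.st_nlink);
    close(fd);
    return 0;
}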

Mandatory file lock on linux

On Linux I can dd a file on my hard drive and delete it in Nautilus while the dd is still going on.
Can Linux enforce a mandatory file lock to protect R/W?
[EDIT] The original question wasn't about Linux file-locking capabilities but about a supposed bug in Linux; reproducing it here since it is answered below and others may have the same question.
People keep telling me Linux/Unix is better OS. I am coding Java on Linux now and come across a problem, that I can easily reproduce: I can dd a file on my hard drive and delete it in Nautilus while the dd is still going on. How come linux cannot enforce a mandatory file lock to protect R/W??
To do mandatory locking on Linux, the filesystem must be mounted with the -o mand option, and you must set g-x,g+s permissions on the file. That is, you must disable group execute, and enable setgid. Once this is performed, all access will either block or error with EAGAIN based on the value of O_NONBLOCK on the file descriptor. But beware: "The implementation of mandatory locking in all known versions of Linux is subject to race conditions which render it unreliable... It is therefore inadvisable to rely on mandatory locking." See fcntl(2).
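For reference, the lock itself is taken with plain fcntl(2); it is the mount option and permission bits described above that make it mandatory rather than advisory. A minimal sketch (the path is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Take an exclusive write lock on the whole file and return the fd;
 * the lock is released when the fd is closed or the process exits. */
int lock_whole_file(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd == -1) { perror("open"); return -1; }

    struct flock fl = {
        .l_type   = F_WRLCK,    /* exclusive write lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,          /* 0 means "to end of file" */
    };
    if (fcntl(fd, F_SETLKW, &fl) == -1) {   /* F_SETLKW waits for the lock */
        perror("fcntl");
        close(fd);
        return -1;
    }
    return fd;
}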
You don't need locking. This is not a bug but a design choice; your assumptions are wrong.
The file system uses reference counting and it will mark a file as free only when all hard links to the file are removed and all file descriptors are closed.
This approach allows safe file system operations that Windows, for example, doesn't: operations like delete, move and rename on files that are in use, without needing locking or breaking anything.
Your dd operation is going to succeed despite the file removal, which will actually be deferred till the dd finishes.
http://en.wikipedia.org/wiki/Reference_counting#Disk_operating_systems
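A small demonstration of that behaviour (the path is illustrative): the data stays readable through the open descriptor even after the last name is removed.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/refcount-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd == -1) { perror("open"); return 1; }

    write(fd, "still here\n", 11);
    unlink("/tmp/refcount-demo");       /* the "delete in Nautilus" step */

    char buf[32];
    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof buf);   /* still succeeds */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    close(fd);                          /* only now are the blocks freed */
    return 0;
}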
[EDIT] My response doesn't make much sense now, as the question was edited by someone else. The original question was about a supposed bug in Linux and not about whether Linux can lock a file:
People keep telling me Linux/Unix is better OS. I am coding Java on Linux now and come across a problem, that I can easily reproduce: I can dd a file on my hard drive and delete it in Nautilus while the dd is still going on. How come linux cannot enforce a mandatory file lock to protect R/W??
Linux and Unix OSes can enforce file locks, but they do not do so by default because of their multiuser design. Try reading the manual pages for flock and fcntl. That might get you started.

Why can't files be manipulated by inode?

Why is it that you cannot access a file when you only know its inode, without searching for a file that links to that inode? A hard link to the file contains nothing but a name and a number telling you where to find the inode with all the real information about the file. I was surprised when I was told that there was no usermode way to use the inode number directly to open a file.
This seems like such a harmless and useful capability for the system to provide. Why is it not provided?
Security reasons -- to access a file you need permission on the file AS WELL AS permission to search all the directories from the root needed to get at the file. If you could access a file by inode, you could bypass the checks on the containing directories.
This allows you to create a file that can be accessed by a set of users (or a set of groups) and not anyone else -- create directories that are only accessible by those users (one directory per user), and then hard-link the file into all of those directories -- the file itself is accessible by anyone, but it can only actually be reached by someone who has search permission on one of the directories it is linked into.
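A rough sketch of that setup, run as root; the paths and file name are illustrative:

#include <pwd.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Give one user access to the shared file by hard-linking it into a
 * directory only that user can search (mode 0700, owned by them). */
int share_with_user(const char *file, const char *basedir, const char *user)
{
    struct passwd *pw = getpwnam(user);
    if (pw == NULL)
        return -1;

    char dir[512], linkpath[1024];
    snprintf(dir, sizeof dir, "%s/%s", basedir, user);
    snprintf(linkpath, sizeof linkpath, "%s/shared.dat", dir);

    if (mkdir(dir, 0700) != 0) return -1;
    if (chown(dir, pw->pw_uid, pw->pw_gid) != 0) return -1;
    if (link(file, linkpath) != 0) return -1;   /* same inode, new name */
    return 0;
}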
Some operating systems do have that facility. For example, OS X needs it to support the Carbon File Manager, and on Linux you can use debugfs. Of course, you can do it on any UNIX from the command line via find -inum, but the real reason you can't access files by inode is that it isn't particularly useful. It does kind of circumvent file permissions, because if there's a file you can read in a folder you can't read or execute, then opening it by inode lets you discover it.
The reason it isn't very useful is that you need to find an inode number via a *stat() call, at which point you already have the filename (or an open fd)...or you need to guess the inum.
In response to your comment: To "pass a file", you can use fd passing over AF_LOCAL sockets by means of SCM_RIGHTS (see man 7 unix).
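A sketch of the sending side (the receiving side mirrors it with recvmsg and CMSG_DATA); in real use the two ends would be separate processes connected over an AF_LOCAL socket:

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send an open file descriptor to the peer on a connected AF_LOCAL socket. */
int send_fd(int sock, int fd_to_send)
{
    char dummy = 'x';                      /* must send at least one data byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {                                /* correctly aligned control buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof u);

    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}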
Btrfs does have an ioctl for that (BTRFS_IOC_INO_PATHS, added in this patch); however, it makes no attempt to check permissions along the path and is simply restricted to root.
Surely if you've already looked up a file via a path, you shouldn't have to do it again and again?
stat(f, &s); i = open(f, O_RDONLY);   /* flag illustrative */
involves two trawls through a directory structure. This wastes CPU cycles with unnecessary string operations. Yes, the well-designed file system cache will hide most of this inefficiency from a casual end-user, but repeating work for no reason is ugly if not plain silly.
