Permission denied when attempting to create/remove/rename files in a write-only directory - linux

There are tons of posts on the net saying that write permission on a directory allows a user to create/remove/rename files in that directory, but I found that this actually cannot be done without the execute permission also being set. I tried calling open/fopen/remove/rename, and without exception they all failed.
There must be something I missed or misunderstood. Some explanations say that operations on a directory usually involve a file operation. If that is right, I wonder which operation is involved. If a directory just maintains mappings from filenames to inodes, there seems to be no reason for a file operation to be involved when renaming.
If an unexpected file operation is involved, is it possible to manipulate the directory directly and bypass such operations?

You are right, the net is full of wrong things, especially about Unix file permission rules. The first links I found with a common search engine were all wrong. Very interesting!
What file permissions on directories mean:
x: you have "access" to the entries in this directory, which means you can use them. If you have access (x) but no read permission, you can work with a file whose name you already know, but you cannot list it!
mkdir one
touch one/x.h
chmod -r one
ls one        # fails!
cat one/x.h   # works!
w: write permission is used for changing the content of the directory itself. So adding, removing, and renaming entries in the dir is only possible if you have write permission on that dir. Note that those operations still have to look up the entry, so in practice you need x on the directory as well, which is exactly why a write-only directory fails.
Because it is impossible to read/write/execute a file which is inside a dir you cannot access, you always need access rights (x) on the dir to work with the files inside.
Sounds a bit strange, but it is simply implemented that way.
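A minimal Rust sketch (not part of the original answer; the directory names are made up) that reproduces the asker's observation: creating a file fails in a write-only (0200) directory but succeeds once the execute bit is added (0300). Run it as a regular user, since root bypasses these checks.
use std::fs::{self, File};
use std::os::unix::fs::PermissionsExt;

fn main() -> std::io::Result<()> {
    // Write permission only (-w-------): entries cannot even be looked up.
    fs::create_dir("w_only")?;
    fs::set_permissions("w_only", fs::Permissions::from_mode(0o200))?;
    // Fails with "Permission denied": creating an entry needs x as well as w.
    assert!(File::create("w_only/new.txt").is_err());

    // Write plus execute (-wx------): the directory can be modified.
    fs::create_dir("wx_only")?;
    fs::set_permissions("wx_only", fs::Permissions::from_mode(0o300))?;
    File::create("wx_only/new.txt")?; // succeeds
    Ok(())
}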

Related

How to open or create a directory if it doesn't exist, atomically

Is there a way to open a directory, creating it if it doesn't exist, atomically?
My use case is simple: I use a directory to watch every public key that is allowed to connect to my server, but if the directory doesn't exist I want to create it. Unfortunately, for now I only see a two-step solution: create the path with create_dir_all() and then open it with read_dir(). But this creates a possible situation where the directory is deleted between the two calls (very unlucky in my docker container... but anyway!).
I didn't find any solution on Linux to do that, and I'm quite surprised, because that's a very common operation for files.
I found a related question, Create a directory and return a dirfd with `open`, but it focuses on file descriptors and so is more specific. The answer seems to say this doesn't prevent the race condition; I don't really understand the context. My only concern is to avoid creating the directory, then trying to open it, and having that fail. It's more for convenience and robustness of the code.
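A minimal sketch of the two-step solution described above (create_dir_all() followed by read_dir()), with a retry if the directory disappears in between; this narrows the race window but does not make the operation atomic, and the helper name is made up:
use std::fs::{self, ReadDir};
use std::io;

fn open_or_create_dir(path: &str) -> io::Result<ReadDir> {
    loop {
        // Ok if the directory already exists.
        fs::create_dir_all(path)?;
        match fs::read_dir(path) {
            Ok(entries) => return Ok(entries),
            // Recreate and retry if it was deleted between the two calls.
            Err(e) if e.kind() == io::ErrorKind::NotFound => continue,
            Err(e) => return Err(e),
        }
    }
}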

How do I get the filename of an open std::fs::File in Rust?

I have an open std::fs::File, and I want to get its filename, e.g. as a PathBuf. How do I do that?
The simple solution would be to just save the path used in the call to File::open. Unfortunately, this does not work for me. I am trying to write a program that reads log files, and the program that writes the logs keeps changing the filenames as part of its log rotation. So the file may very well have been renamed since it was opened. This is on Linux, so renaming open files is possible.
How do I get around this issue, and get the current filename of an open file?
On a typical Unix filesystem, a file may have multiple filenames at once, or even none at all. The file metadata is stored in an inode, which has a unique inode number, and this inode number can be linked from any number of directory entries. However, there are no reverse links from the inode back to the directory entries.
Given an open File object in Rust, you can get the inode number from its metadata using the ino() method (from std::os::unix::fs::MetadataExt). If you know the directory the log file is in, you can use std::fs::read_dir() to iterate over all entries in that directory; each entry also has an ino() method, so you can find the one(s) matching your open file. Of course, this approach is subject to race conditions: the directory entry may already be gone again by the time you try to do anything with it.
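A minimal sketch of that approach (the directory path is a placeholder, and the match may already be stale by the time you use it):
use std::fs::{self, File};
use std::os::unix::fs::{DirEntryExt, MetadataExt};
use std::path::PathBuf;

fn find_current_name(f: &File, dir: &str) -> std::io::Result<Option<PathBuf>> {
    let ino = f.metadata()?.ino();            // inode number of the open file
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        if entry.ino() == ino {                // same inode => same file
            return Ok(Some(entry.path()));
        }
    }
    Ok(None)
}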
On Linux, file handles held by the current process can be found under /proc/self/fd. These look and act like symlinks to the original files (though I think they may technically be something else - perhaps someone who knows more can chip in).
You can therefore recover the (possibly changed) file name by constructing the correct path in /proc/self/fd using your file descriptor, and then following the symlink back to the filesystem.
This snippet shows the steps:
use std::fs::read_link;
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;
// if f is your std::fs::File
// first construct the path to the symlink under /proc
let path_in_proc = PathBuf::from(format!("/proc/self/fd/{}", f.as_raw_fd()));
// ...and follow it back to the original file
let new_file_name = read_link(path_in_proc).unwrap();

Permission to know whether a directory is empty or not

When I tried to simulate the permission system under Linux, some strange things came about.
I created a directory 'main' as user 'normal', and inside it created a directory 'aha' with permissions 700 as root.
So the owner of 'main' is 'normal'. If its permissions are 755, I can delete 'aha' using the 'normal' user even though its owner is root.
But when I put a file in 'aha', everything changes: I cannot remove 'aha' because there is still a file in it.
So my question is: since 'aha' is 700 and owned by root, how can 'normal' know whether it's empty or not?
My further question is : what does read permission of a directory really mean?
Think of a UNIX directory as a drawer of index cards in the library catalog.
In order to know what books there are, you need read permission on the "drawer". In order to create or remove "books", you need write permission (which gives you the ability to put new cards in, or remove existing cards from, the drawer). In order to "traverse" the directory into a lower-level "sub-drawer", you need execute permission on the drawer itself.
If you already know that book /foo/bar/baz exists, you don't need read permissions on /, /foo or /foo/bar, but you do need execute permissions on all of them.
A given book could be referenced by multiple "cards" in the same or separate "drawer" (that's hard links).
A "card" can reference another card (that's symlinks). Symlinks could became "dangling" (if the other card is removed).
When a book is not referenced by any card in any of the drawers, it "evaporates" from the library.
since 'aha' is 700 by root, how can 'normal' know it's empty or not
Well, one way is to try to remove it. If you succeed, it must have been empty. If it was not empty, "normal" can't find out any more than that, since "normal" can't read the directory, and therefore can't find out how many cards are in that "drawer", or what they are called.
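A minimal sketch of that probe in Rust ('aha' is the directory from the question): remove_dir() only succeeds on an empty directory, so success means it was empty (and is now gone), while a "directory not empty" error means it was not.
use std::fs;

fn main() {
    match fs::remove_dir("aha") {
        Ok(()) => println!("aha was empty (and has now been removed)"),
        Err(e) => println!("could not remove aha: {}", e), // e.g. "Directory not empty"
    }
}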
Update:
Why do you need execute permission to traverse a directory?
Because that's the definition of the eXecutable bit for directories. Since you can't reasonably execute a directory, that bit would be wasted otherwise. No, the . and .. files have nothing to do with the execute bit.
Very basically, a file/dir permission of 700 is not seven hundred but actually
owner = 7
group = 0
everyone = 0
the numbers pertain to a permission level
0 = no permission
1 = allow to execute (run file or access directory)
2 = allow to write (manipulate)
4 = allow to read (see)
you add the permission levels up to assign more than one permission, for example
$ chmod 754 foo
gives full access to the owner (1+2+4), execute and read to the group (1+4), and read to everyone (4). Look at
http://www.linuxclues.com/articles/16.htm
http://www.tuxfiles.org/linuxhelp/filepermissions.html
for more info
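As a small illustration of the same arithmetic from code, here is a minimal Rust sketch (assuming a file named foo already exists, as in the chmod example above) that applies 754 and prints the resulting mode:
use std::fs;
use std::os::unix::fs::PermissionsExt;

fn main() -> std::io::Result<()> {
    // 0o754 = rwx (4+2+1) owner, r-x (4+1) group, r-- (4) everyone else.
    fs::set_permissions("foo", fs::Permissions::from_mode(0o754))?;
    // Mask off the file-type bits and print only the permission digits.
    println!("{:o}", fs::metadata("foo")?.permissions().mode() & 0o777);
    Ok(())
}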

How can UNIX access control create compromise problems?

My system administrator advised me to be careful when setting access control on files and directories. He gave me an example and I got confused; here it is:
a file with protection mode 644 (octal) contained in a directory with protection mode 730.
so it means:
File: 110 100 100 (owner, group, other: rw- r-- r--)
Directory: 111 011 000 (owner, group, other: rwx -wx ---)
How can file be compromised in this case?
It depends on what you mean by 'compromise' and it depends on who belongs to the group.
The directory permissions are critical. Members of the group can access the directory ('x') and can modify it ('w'), even though they cannot list it (no 'r'). That means that if a member of the group knows the name of the file, that person can also remove it, because removing a file requires permission to write to the directory; the file permissions are immaterial. (Commands such as 'rm' warn you when you don't have write permission on the file, but that is a courtesy, because it doesn't matter to the unlink() system call.)
So, a member of your group (or, more precisely, a member of the group to which the directory belongs) can remove the file if they know its name. They can also read the file if they know its name, and they can create a file of the same name if the original is already missing. It appears from the file permissions that being able to read the file is not a compromise - you would have denied group read access (and public read access) if that mattered.
Note that although your group members cannot modify the file, because they can delete the file and create a new one with the same name, the result is basically the same as being able to modify the file. One key difference is that you'd know which user did the mischief because that user would own the file. (Well, someone with access to that user ID did the mischief.)
Since the directory can be written to, the file could simply be overwritten with another if the attacker is in the directory owner's group.
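A minimal sketch (the path and file name are hypothetical) of that delete-and-recreate replacement: with w+x on the directory, a group member never writes to the 644 file at all; it is simply unlinked and a new file with the same name, owned by the attacker, is dropped in its place.
use std::fs;

fn main() -> std::io::Result<()> {
    // Allowed by the directory's w+x bits, regardless of the file's own mode.
    fs::remove_file("dir/secret.conf")?;
    // A new file with the same name, now owned by the attacker.
    fs::write("dir/secret.conf", "attacker data")?;
    Ok(())
}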

Implementing FUSE

I want to implement a file system using FUSE. When the contents of a directory are requested, only the types of the files in that directory should be reported, as subdirectories. For example, if the directory contains ugur.PDF, guler.JPG and devatate.PNG, the files themselves will not be reported; their types (PDF, JPG and PNG) will be reported instead. I tried to implement this file system. My problem is that I cannot see how to report this without changing the ls -l command. How does the ls -l command work? (I have a readdir function and it loads file_type and file_name into the buffer. I tried to change it but I couldn't achieve this.)
How does ls -l work? That seems to be the crux of the issue. It reads the directory and calls stat(2) (usually lstat(2)) on each name; that system call fills in the stat structure. When you implement a FUSE provider, you are providing the data that ends up in the stat structure. If you want to change the directory structure, you return the necessary fabricated information for the various items.
I can think of two approaches to this:
1. Use a database like SQLite
You can store the files, paths and file types in the database. Then, when the user enters some directory, you can run a query like select file_types where path="" and populate the results as directories using filler().
2. Recursively traverse original path
You can create a list of all file types in the current directory, then use filler() to post them as directories. Then, when the user enters any one of those directories, you can again do a check to see whether cur_path is in last_path (the original directory), and you can select those file types and display them.
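As a rough sketch of the listing logic for approach 2 (outside any FUSE framework, so the function name and types are just for illustration), this collects the distinct file types of a real directory so that a readdir implementation could hand them to filler() as synthetic subdirectories:
use std::collections::BTreeSet;
use std::fs;
use std::io;

fn file_types(dir: &str) -> io::Result<BTreeSet<String>> {
    let mut types = BTreeSet::new();
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        // Report e.g. "PDF", "JPG", "PNG" instead of the files themselves.
        if let Some(ext) = path.extension().and_then(|e| e.to_str()) {
            types.insert(ext.to_uppercase());
        }
    }
    Ok(types)
}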
