Lowest permission level to see the content of a file? - security

How can I see the contents of a file with 111 permissions? Something called a Y combinator takes a file as input and prints its content. My instinct says that you can run it with 100 permissions. However, I only know the theory, not the practice.
Which is the lowest permission level to see a file with Y-combinator in Bash?
The user nobody_ comments:
You don't make any sense. The Y combinator is used to create recursive functions and has nothing to do with permissions.
A question arises:
Which is the lowest permission level to see a file in Bash?

You can't read the contents of a file with those permissions.
Permissions of '111' are 'execute only' and are almost useless on a regular file. For a script to be executed, the user running it needs read permission as well as execute, because the interpreter has to read the script; so with mode 111 nobody can run it either. (A compiled binary is the exception: the kernel loads it, so execute alone is enough.)
If you are worried about others reading your files, you probably want to use '500', which is read and execute for just you (the owner).
For more information and what these numbers mean (octal notation) you should read this page on Wikipedia:
http://en.wikipedia.org/wiki/File_system_permissions#Octal_notation
Cheers,
Darryl

To execute a script, the interpreter needs to be able to load its content into memory, hence to have read access (compiled binaries are a separate case, since the kernel loads those).
So leaving only execute rights on your files is not going to allow anybody to read them. However, this is still a bad idea: files that are not meant to be executed should not have execute rights. In your position, I would be much more worried about accidentally executing a text file that begins with rm * than about somebody using tricks to peek at my files.

I think you can't, and even the interpreter won't be able to read it (and therefore won't run it).
However, you shouldn't be worried about people seeing your code; if there are, e.g., security flaws, you should fix them instead.

Related

Meaning of the read permission for binary executable?

I am interested in the full impact of the read permission for binary executables. Indeed, I have encountered some behaviors that I wish to understand.
Let's say I have a C program that just calls sleep(300). When the binary has the read permission, I am able to inspect the /proc/$PID folder associated with the running program. But when I remove this permission, I cannot access that folder: it does not exist.
Similarly, if I have a cleverer program that copies a string from one pointer to another, calling strace on the executable will yield better results if the binary is readable (for example, strace will show what every pointer points to).
Since strace relies on ptrace to analyze the running program's internals, I don't understand the impact of the read permission. I would have expected the read permission to matter only for static analysis, which relies on reading the binary.
Given the observed impact of the read permission, does that mean it is good practice to remove the read permission from all binaries on servers where security is critical?
It's certainly possible on Linux to have a binary with only execute permissions, as you've discovered. Doing this has the potential to cause problems with troubleshooting, as you've also discovered, because it makes the process harder to instrument.
I've certainly seen installations where the administrators have systematically removed read permissions from all their own binaries. I've sometimes felt that doing this has caused problems, although the installations where this kind of thing was done were so complex that it was difficult to be certain.
I guess you have to weigh the small increase in security against the small decrease in serviceability. My experience is that, whatever the merits of removing read permissions, it doesn't seem to be a common practice in the Linux world.

Detecting if the code read the specified input file

I am writing some automated tests for testing code and providing feedback to the programmer.
One of the requirements is to detect if the code has successfully read the specified input file. If not, we need to provide feedback to the user accordingly. One way to detect this was the atime timestamp, but since our server's drive is mounted with the relatime option, we do not get atime updates for every file read. Changing this option to record every atime is not feasible, as it slows down our I/O operations significantly.
Is there any other alternative that we can use to detect if the given code indeed reads the specified input file?
Here's a wild idea: intercept the read call at some point. One possible approach goes more or less like this:
The program does all its reading through an abstraction. For example, MyFileUtils.read(filename) (custom) instead of File.read(filename) (stdlib).
During normal operation, MyFileUtils simply delegates the work to File (or whatever system built-in libraries/calls you use).
But under test, MyFileUtils is replaced with a special test version which, along with the delegation, also reports usage to the framework.
Note that in some environments/languages it might be possible to inject code into File directly and the abstraction will not be needed.
I agree with Sergio: touching a file doesn't mean that it was read successfully. If you want to be really sure, the programs under test have to send some sort of indication back; and of course, there are many options for that.
A pragmatic way could be this: assuming that the programs under test create log files, your "test monitor" could check that the log files contain fixed entries such as "reading xyz PASSED" or something alike.
If your code under test doesn't create log files, maybe consider changing that.

How to check if a file is opened in Linux?

The thing is, I want to track if a user tries to open a file on a shared account. I'm looking for any record/technique that helps me know if the concerned file is opened, at run time.
I want to create a script which monitors if the file is open, and if it is, I want it to send an alert to a particular email address. The file I'm thinking of is a regular file.
I tried using lsof | grep filename to check whether a file is open in gedit, but the command doesn't return anything.
Actually, I'm trying this for a pet project, and thus the question.
The command lsof -t filename shows the IDs of all processes that have the particular file opened. lsof -t filename | wc -w gives you the number of processes currently accessing the file.
The fact that a file has been read into an editor like gedit does not mean that the file is still open. The editor most likely opens the file, reads its contents and then closes the file. After you have edited the file you have the choice to overwrite the existing file or save as another file.
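For completeness, here is a rough Python equivalent of lsof -t filename that scans /proc/<pid>/fd. It is Linux-only, can only inspect processes the caller has permission to look at, and also demonstrates the point above: once a process closes the file, it disappears from the result.

```python
import os
import tempfile

def pids_with_open(target):
    """Rough sketch of `lsof -t target`: scan /proc/<pid>/fd (Linux)."""
    target = os.path.realpath(target)
    pids = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = "/proc/%s/fd" % pid
        try:
            entries = os.listdir(fd_dir)
        except OSError:        # process gone, or not ours to inspect
            continue
        for fd in entries:
            if os.path.realpath(os.path.join(fd_dir, fd)) == target:
                pids.append(int(pid))
                break
    return pids

fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path)                                # while the file is held open...
print(os.getpid() in pids_with_open(path))    # True
f.close()                                     # ...and once it is closed,
print(os.getpid() in pids_with_open(path))    # False
os.unlink(path)
```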
You could (in addition of other answers) use the Linux-specific inotify(7) facilities.
I understand that you want to track one particular given file (or a few of them) with a fixed file path (actually a given i-node). E.g., you would want to track when /var/run/foobar is accessed or modified, and do something when that happens.
In particular, you might want to install and use incrond(8) and configure it through incrontab(5).
If you want to run a script when some given file (on a native local file system such as ext4 or Btrfs, but not NFS) is accessed or modified, the inotify-based incrond is made for exactly that purpose.
PS. AFAIK, inotify doesn't work well for remote network files, e.g. NFS filesystems (in particular when another NFS client machine is modifying a file).
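As an illustration, an incrontab(5) entry for the /var/run/foobar example above might look like this (alert.sh is a hypothetical script; $@ expands to the watched path, $# to the event-related file name, and $% to the event flags):

```
/var/run/foobar IN_ACCESS,IN_MODIFY /usr/local/bin/alert.sh $@ $# $%
```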
If the files you care about are somehow source files, you might be interested in revision control systems (like git) or build systems (like GNU make); in a certain way these tools are related to file modification.
You could also have the particular file sit in some FUSE filesystem, and write your own FUSE daemon.
If you can restrict and modify the programs accessing the file, you might want to use advisory locking, e.g. flock(2), lockf(3).
Perhaps the data sitting in the file should be in some database (e.g. SQLite, or a real DBMS like PostgreSQL or MongoDB). ACID properties are important...
Notice that the filesystem and the mount options may matter a lot.
You might want to use the stat(1) command.
It is difficult to help more without understanding the real use case and the motivation. You should avoid an XY problem.
Probably the workflow is wrong (having a shared file that several users are able to write), and you should approach the overall issue in some other way. For a pet project I would at least recommend using some advisory lock, and accessing & modifying the information only through your own programs (perhaps setuid) using flock (this excludes ordinary editors like gedit and commands like cat ...). However, your implicit use case seems well suited to a DBMS approach (a database does not have to contain a lot of data; it might be tiny), or to some locked indexed file such as the GDBM library handles.
Remember that on POSIX systems and Linux, several processes can access (and even modify) the same file simultaneously (unless you use some locking or synchronization).
Reading the Advanced Linux Programming book (freely available) would give you a broader picture (but it does not mention inotify, which appeared after the book was written).
You can use ls -lrt; it lists files sorted by their last modification time. From that you can conclude whether the file was recently written, though not whether it is currently open. Make sure that you are in the right directory.

Why can't files be manipulated by inode?

Why is it that you cannot access a file when you only know its inode, without searching for a file that links to that inode? A hard link to the file contains nothing but a name and a number telling you where to find the inode with all the real information about the file. I was surprised when I was told that there was no usermode way to use the inode number directly to open a file.
This seems like such a harmless and useful capability for the system to provide. Why is it not provided?
Security reasons -- to access a file you need permission on the file AS WELL AS permission to search all the directories from the root needed to get at the file. If you could access a file by inode, you could bypass the checks on the containing directories.
This allows you to create a file that can be accessed by a set of users (or a set of groups) and not anyone else -- create directories that are only accessible by the users in question (one dir per user), and then hard-link the file into all of those directories -- the file itself is accessible by anyone, but can only actually be reached by someone who has search permission on one of the directories it is linked into.
Some operating systems do have that facility. For example, OS X needs it to support the Carbon File Manager, and on Linux you can use debugfs. Of course, you can do it on any UNIX from the command line via find -inum, but the real reason you can't access files by inode is that it isn't particularly useful. It does kind of circumvent file permissions, because if there's a file you can read in a folder you can't read or execute, then opening it by inode lets you discover it.
The reason it isn't very useful is that you need to find an inode number via a *stat() call, at which point you already have the filename (or an open fd)...or you need to guess the inum.
In response to your comment: To "pass a file", you can use fd passing over AF_LOCAL sockets by means of SCM_RIGHTS (see man 7 unix).
Btrfs does have an ioctl for that (BTRFS_IOC_INO_PATHS, added in this patch); however, it makes no attempt to check permissions along the path and is simply restricted to root.
Surely if you've already looked up a file via a path, you shouldn't have to do it again and again?
stat(f,&s); i=open(f,O_MODE);
involves two trawls through the directory structure. This wastes CPU cycles on unnecessary string operations. Yes, a well-designed file system cache will hide most of this inefficiency from a casual end-user, but repeating work for no reason is ugly, if not plain silly.

Append only file

I'm trying to implement a log file. Each event just appends one line to the file. So far this is a no-brainer. The hard part is that several users are supposed to be able to add entries to that file, but no one is supposed to be able to modify or delete existing ones. Can I somehow enforce this using file access rights? I'm using Linux.
Thx
On Linux you have the option of using the system append-only flag. This is not available on all filesystems.
This attribute is set using the chattr utility; see its man page for details. Only root can set this attribute.
On Ubuntu you'll probably end up doing:
sudo chattr +a filename
The classic permissions (read, write, and execute) won't get you there. If you have write permission, you can truncate the file, and with it all the lines in it.
You'll need some kind of program to arbitrate the file access. One way would be to open up a FIFO and have the producers write to the FIFO. If the writes are not too big (4k writes are atomic on my Linux box), the different writes won't get intermixed. By giving the consumer process privileges that the producers don't have, the producers won't be able to see the final results.
You could use something like syslog to do this.
