Can you please summarize the events/steps that happen when I execute a read()/write() system call? How does the kernel know which file system to issue these commands to?
Let's say a process calls write().
That in turn invokes sys_write() in the kernel.
Presumably, since sys_write() executes on behalf of the current process, it can access its struct task_struct, and through it the struct files_struct and struct fs_struct, which contain file system information.
But beyond that point I don't see how this fs_struct helps to identify the file system.
Edit: Now that Alex has described the flow, I still don't see how read/write is routed to a particular FS. If the VFS does not do it, it must be happening somewhere else. Also, how do the underlying block device and, finally, the hardware protocol (PCI/USB) get attached?
A simple flow chart involving actual data structures would be helpful
Please help.
This answer is based on kernel version 4.0. I traced out some of the code which handles a read syscall. I recommend you clone the Linux source repo and follow along in the source code.
The syscall handler for read, at fs/read_write.c:620, is called. It receives a file descriptor (an integer) as an argument and calls fdget_pos to convert it to a struct fd.
fdget_pos calls __fdget_pos, which calls __fdget, which calls __fget_light. __fget_light uses current->files, the file descriptor table of the current process, to look up the struct file which corresponds to the passed file descriptor number.
Back in the syscall handler, the file struct is passed to vfs_read, at fs/read_write.c:478.
vfs_read calls __vfs_read, which calls file->f_op->read. From here on, you are in filesystem-specific code.
So the VFS doesn't really bother "identifying" the filesystem which a file lives on; it simply uses the table of "file operation" function pointers which is stored in its struct file. When that struct file is initialized, it is given the correct f_op function pointer table which implements all the filesystem-specific operations for its filesystem.
Each filesystem registers itself with the VFS. When a filesystem is mounted, its superblock is read and the VFS superblock is populated with this information. The function pointer table for this filesystem is also populated at this time. When the file->f_op->read call happens, the function registered by the filesystem is actually called. You can refer to the text at http://www.science.unitn.it/~fiorella/guidelinux/tlk/node102.html
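The dispatch mechanism described above can be sketched in userspace C. This is a deliberately miniature model, not kernel code: the struct names mimic the kernel's, but the types and the fake_ext4 implementation are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Miniature model of the kernel's dispatch: a "file" carries a table of
 * operations, set once when the file is opened; vfs_read() never inspects
 * the filesystem type, it just calls through the pointer. */
struct file_ops {
    long (*read)(const char *backing, char *buf, size_t len);
};

struct file {
    const char *backing;          /* stand-in for filesystem state */
    const struct file_ops *f_op;  /* filled in when the file is opened */
};

/* Each filesystem supplies its own read implementation at registration. */
static long fake_ext4_read(const char *backing, char *buf, size_t len)
{
    size_t n = strlen(backing);
    if (n > len)
        n = len;
    memcpy(buf, backing, n);
    return (long)n;
}

static const struct file_ops fake_ext4_ops = { .read = fake_ext4_read };

/* The "VFS" layer: pure dispatch, no filesystem identification. */
static long vfs_read_sketch(struct file *f, char *buf, size_t len)
{
    return f->f_op->read(f->backing, buf, len);
}
```

A second filesystem would simply install a different `read` pointer in its own ops table; the dispatch code stays untouched.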
Recently I have been developing an application whitelisting solution for embedded Linux based on the Linux Security Modules (LSM) framework. The main focus of my LSM is implementing the bprm_check_security hook, which is invoked when a program executes in user space (we do not consider kernel processes).
This hook is given a pointer of type "struct linux_binprm *bprm". This pointer includes a file pointer (referring to the executable file of the executed program) and a char pointer (holding the name of the executed program).
Our application whitelisting solution is based on hash calculation. Accordingly, in my LSM I use the file pointer (contained in the bprm pointer) to calculate a hash value and store that value, together with the filename (from the bprm pointer), as an entry in a list.
However, during Linux boot (before /sbin/init is executed), there are mismatches between the filename and the file pointer.
For instance, for one of the first executing programs, the filename in the bprm pointer is "/bin/cat"; however, the file pointer in the same bprm structure does not refer to the actual /bin/cat file, but rather to busybox.
After researching for a long time, I found out that those files are executed by busybox from the initial initrd, which subsequently creates the actual rootfs, and that all of those files carry the magic number RAMFS_MAGIC (stored in inode->i_sb->s_magic). So I used this number to filter out those processes; however, I am not sure whether this is the right way to do it. I would appreciate any help.
Note that I use the file pointer (included in the bprm pointer) to calculate the hash values; in other words, I don't read files by filename or file path from user space.
Thanks.
/include/linux/binfmts.h
struct linux_binprm {
	struct file * file;
	/* ... other fields omitted ... */
	const char * filename;	/* Name of binary as seen by procps */
};
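The s_magic filter described in the question could look roughly like the fragment below. This is a non-runnable kernel sketch assuming a 4.x-era API; the function name whitelist_bprm_check and the decision logic are illustrative, not a vetted security design.

```c
/* Sketch: skip executables whose backing file lives on the initial
 * ramfs, as the questioner proposes. Hook name registration and the
 * actual hashing are omitted. */
#include <linux/binfmts.h>
#include <linux/fs.h>
#include <linux/magic.h>

static int whitelist_bprm_check(struct linux_binprm *bprm)
{
	struct super_block *sb = bprm->file->f_inode->i_sb;

	if (sb->s_magic == RAMFS_MAGIC)	/* early initramfs/busybox phase */
		return 0;		/* allow without hashing */

	/* ... hash bprm->file contents and compare against the whitelist ... */
	return 0;
}
```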
I am currently developing a simple kernel module that can intercept system calls such as open, read, and write, replacing each with a simple function that logs the files being opened, read, or written into a file and then calls the original system call.
My query is: I am able to get the file descriptor in the read and write system calls, but I do not understand how to obtain the file name from it.
Currently I am able to access the file structure associated with a given FD using the following code:
struct file *file;
file = fcheck(fd);
This file structure has two entities in it which I believe are of concern:
f_path
f_inode
Can anybody help me get the dentry, inode, or path name associated with this fd using its file structure?
Is my approach correct? Or do I need to do something different?
I am using Ubuntu 14.04 and my kernel version is 3.19.0-25-generic, for the kernel module development.
.f_inode is actually an inode.
.f_path->dentry is a dentry.
Traversing this dentry via the ->d_parent link until the f_path.mnt->mnt_root dentry is reached, collecting the dentry->d_name components along the way, will construct the file's path relative to the mount point. This is done, e.g., by d_path, but in a more careful way.
Instead of fcheck(fd), which must be called inside an RCU read-side critical section, you can also use fget(fd), which must be paired with fput().
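Putting the two points together, a path lookup from inside the kernel could look like the sketch below. This is a non-runnable kernel fragment assuming a ~3.19/4.x API and process context; the function name log_name_of_fd is illustrative.

```c
/* Sketch: recover a path string for the file backing an fd, from
 * inside the kernel, using fget()/fput() and d_path(). */
#include <linux/dcache.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/limits.h>
#include <linux/slab.h>

static void log_name_of_fd(unsigned int fd)
{
	struct file *file = fget(fd);
	char *buf, *name;

	if (!file)
		return;
	buf = kmalloc(PATH_MAX, GFP_KERNEL);
	if (buf) {
		/* d_path writes into the end of buf and returns a pointer
		 * within it; it can fail with an ERR_PTR. */
		name = d_path(&file->f_path, buf, PATH_MAX);
		if (!IS_ERR(name))
			pr_info("fd %u -> %s\n", fd, name);
		kfree(buf);
	}
	fput(file);
}
```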
The approach is completely incorrect - see http://www.watson.org/~robert/2007woot/
Linux already has a reliable mechanism for doing this (audit). If you want to implement it anyway (for fun, I presume), you want to place your hooks roughly where audit places its own. Chances are the LSM hooks are in appropriate places; I have not checked.
I am reading about device drivers and I have a question related to the UNIX philosophy of treating everything as a file.
When a user issues a command, say for example opening a file, what comes into action: a system call or a file operation?
sys_open is a system call and open is a file operation. Can you please elaborate on the topic?
Thanks in advance.
Quick answer, I hope it'll help:
All system calls work the same way. The system call number is stored somewhere (e.g., in a register) together with the system call parameters. In the case of open, the parameters are a pointer to the filename, the flags, and optionally a mode. Then the open wrapper raises a software interrupt using the appropriate instruction (syscall, int 0x80, ...; it depends on the hardware).
As with any interrupt, the kernel is invoked (in kernel mode) to handle it. The kernel detects that the interrupt was caused by a system call, reads the system call number from the register, sees that it is an open system call, creates the file descriptor in kernel memory, and proceeds to actually open the file by calling the driver's open function. The file descriptor number is then stored back into a register and control returns to user mode.
The file descriptor is then retrieved from the register and returned by open().
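The mechanism above can be exercised directly from userspace through the generic syscall(2) wrapper, which loads the syscall number and arguments into registers and executes the trap instruction for you. The scratch file path below is an arbitrary choice for the example.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open, write, rewind, and read a file purely via raw system calls:
 * each syscall() invocation is the "store number + arguments, trap"
 * sequence described above. Returns bytes read back, or -1. */
static long roundtrip(const char *path, char *buf, size_t len)
{
    long fd = syscall(SYS_openat, AT_FDCWD, path,
                      O_CREAT | O_TRUNC | O_RDWR, 0644);
    if (fd < 0)
        return -1;
    syscall(SYS_write, fd, "hi", 2);
    syscall(SYS_lseek, fd, 0, SEEK_SET);   /* rewind to re-read */
    long n = syscall(SYS_read, fd, buf, len);
    syscall(SYS_close, fd);
    return n;
}
```

Normally you would call the libc wrappers open()/read()/write(), which do exactly this under the hood.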
"Each open file (represented internally by a file structure, which we will examine shortly) is associated with its own set of functions (by including a field called f_op that points to a file_operations structure). The operations are mostly in charge of implementing the system calls and are therefore named open, read, and so on."
This is from the Character Drivers chapter of LDD. Can anyone please elaborate on what the last line means?
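What the quoted passage describes looks like the following non-runnable kernel fragment. The names follow LDD's scull example in spirit, but the stub bodies here are placeholders, not the book's actual implementations.

```c
/* A character driver backs the open/read/... system calls for its
 * device by filling in a file_operations table; this is why the
 * operations carry the same names as the system calls. */
#include <linux/fs.h>
#include <linux/module.h>

static int scull_open(struct inode *inode, struct file *filp)
{
	return 0;	/* driver-specific setup would go here */
}

static ssize_t scull_read(struct file *filp, char __user *buf,
			  size_t count, loff_t *f_pos)
{
	return 0;	/* copy device data to buf with copy_to_user() */
}

static const struct file_operations scull_fops = {
	.owner = THIS_MODULE,
	.open  = scull_open,
	.read  = scull_read,
};
```

When a process calls read() on the device file, the kernel's VFS layer ends up invoking scull_read through this table.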
I need to know how to write a system call that locks and unlocks a file (inode) or a partition (super_block) for read and write operations.
For example, these functions are in fs.h:
lock_super(struct super_block *);
unlock_super(struct super_block *);
How do I obtain the super_block (for /dev/sda1, for example)?
The lock_super and unlock_super calls are not meant to be invoked directly by user-level processes. They are only meant to be called by the VFS layer when an operation on an inode of the filesystem is requested by a user process. If you still wish to do that, you have to write your own device driver and expose the desired functionality (locking and unlocking of the inode) to user level.
There are no current system calls that would allow you to lock and unlock inodes. There are many reasons why it is not wise to implement a new system call without due consideration. But if you wish to do so, you would need to write the handler for your own system call in the kernel. It seems you want fine-grained control over the file system; perhaps you are implementing a user-level file system.
As for how to get the super_block: every file-system module registers itself with the VFS (Virtual File System). The VFS acts as an intermediate layer between the user and the actual file system, so it is the VFS that knows the function pointers to the lock_super and unlock_super methods. The VFS superblock contains the device info and a set of pointers to the file-system superblock; you can get those pointers from there and call them. But remember: because the actual file system is managed by the VFS, you could potentially corrupt the data.
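One way to reach a super_block from inside the kernel is to resolve a path on the mounted filesystem and follow the dentry. This is a non-runnable kernel sketch; the helper name sb_of and the use of a mount point (rather than the block device node itself) are illustrative assumptions.

```c
/* Sketch: resolve a mount point to its super_block via the VFS. */
#include <linux/fs.h>
#include <linux/namei.h>

static struct super_block *sb_of(const char *mountpoint)
{
	struct path p;
	struct super_block *sb = NULL;

	if (kern_path(mountpoint, LOOKUP_FOLLOW, &p) == 0) {
		sb = p.dentry->d_sb;
		path_put(&p);	/* sb pointer only stays valid while mounted */
	}
	return sb;
}
```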
I am writing a program consisting of a user program and a kernel module. The kernel module needs to gather data that it will then "send" to the user program, and this has to be done via a /proc file. I can create the file and everything is fine, but I have spent ages searching the internet for an answer and still cannot find one: how do you read/write a /proc file from kernel space? The write_proc and read_proc functions supplied to the proc file are used to read and write data from user space, whereas I need the module to be able to write the /proc file itself.
That's not how it works. When a userspace program opens a /proc file, its contents are generated on the fly on a case-by-case basis. Most entries are read-only and generated by a common mechanism:
Register an entry with create_proc_read_entry
Supply a callback function (called read_proc by convention) which is called when the file is read
This callback function should populate a supplied buffer and (typically) call proc_calc_metrics to update the file position etc. supplied to userspace.
You (from the kernel) do not "write" to procfs files, you supply the results dynamically when userspace requests them.
One approach to getting data across to user space would be seq_files. In order to configure (write) kernel parameters, you may want to consider sysfs nodes.
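A minimal seq_file-backed /proc entry could look like the sketch below. This is a non-runnable kernel module fragment assuming a 3.x/4.x API (proc_create with a file_operations table; later kernels use proc_ops instead); the entry name and the exported variable are illustrative.

```c
/* The module never "writes" the /proc file: my_show runs on demand,
 * each time userspace reads /proc/my_module_data. */
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static int gathered_value;	/* data the module has collected */

static int my_show(struct seq_file *m, void *v)
{
	seq_printf(m, "%d\n", gathered_value);
	return 0;
}

static int my_open(struct inode *inode, struct file *file)
{
	return single_open(file, my_show, NULL);
}

static const struct file_operations my_proc_fops = {
	.owner   = THIS_MODULE,
	.open    = my_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

static int __init my_init(void)
{
	proc_create("my_module_data", 0444, NULL, &my_proc_fops);
	return 0;
}

static void __exit my_exit(void)
{
	remove_proc_entry("my_module_data", NULL);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```

`cat /proc/my_module_data` from userspace would then print whatever gathered_value holds at that moment.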
Thanks,
Vijay