How to read VFS attributes in Linux

I have a question.
I am trying to read some VFS attributes, for example the s_magic value in struct super_block, but I can't read s_magic.
This is my code:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/fs.h>

int main()
{
    int fd;
    char boot[1024];
    struct super_block sb;

    fd = open("/dev/fd", O_RDONLY);
    read(fd, boot, 1024);
    read(fd, &sb, sizeof(struct super_block));
    printf("%x\n", sb.s_magic);
    close(fd);
    return 0;
}
This code doesn't compile; gcc reports:
storage size of 'sb' isn't known
invalid application of 'sizeof' to incomplete type 'struct super_block'
Thank you.

That's because your linux/fs.h does not contain a super_block declaration: you want to include linux/fs.h from the Linux kernel sources, but you are actually including linux/fs.h from the userspace headers. You could supply an -I <include path> option to gcc, like this:
gcc -I /usr/src/kernels/$(uname -r)/include
BUT!
You will get a million errors, because you can't just include kernel headers in a userspace program; you don't have all the type and struct definitions they depend on.
The kernel headers are not written with user space in mind, and they can change at any time. The proper way for user-space applications to interface with the kernel is by way of the C library, which provides its own structures and, when necessary, translates them into whatever the current kernel expects. This separation helps to keep user-space programs from breaking when the kernel changes.
(source http://lwn.net/Articles/113349/)
So you have to revise your code.
P.S. I've given you an explanation of why your code is not working, but I don't know of a general way to read the superblock from userspace. You'd better ask another question regarding the filesystem superblock API.
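For what it's worth, the one field you mention, s_magic, is exposed to userspace through the C library: statfs(2) reports the filesystem magic number in its f_type field. A minimal sketch (querying the root filesystem, just as an example):

#include <stdio.h>
#include <sys/vfs.h>

int main()
{
    struct statfs fs;

    if (statfs("/", &fs) == -1) {
        perror("statfs");
        return 1;
    }
    /* f_type holds the filesystem magic, e.g. 0xef53 for ext2/3/4 */
    printf("%lx\n", (unsigned long)fs.f_type);
    return 0;
}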

Related

Getting ENOTTY on ioctl for a Linux Kernel Module

I have the following chardev defined:
.h
#define MAJOR_NUM 245
#define MINOR_NUM 0
#define IOCTL_MY_DEV1 _IOW(MAJOR_NUM, 0, unsigned long)
#define IOCTL_MY_DEV2 _IOW(MAJOR_NUM, 1, unsigned long)
#define IOCTL_MY_DEV3 _IOW(MAJOR_NUM, 2, unsigned long)
module.c
static long device_ioctl(struct file *file,
                         unsigned int ioctl_num,
                         unsigned long ioctl_param)
{
    ...
}

static int device_open(struct inode *inode, struct file *file)
{
    ...
}

static int device_release(struct inode *inode, struct file *file)
{
    ...
}

struct file_operations Fops = {
    .open = device_open,
    .unlocked_ioctl = device_ioctl,
    .release = device_release,
};

static int __init my_dev_init(void)
{
    register_chrdev(MAJOR_NUM, "MY_DEV", &Fops);
    ...
}

module_init(my_dev_init);
My user code
ioctl(fd, IOCTL_MY_DEV1, 1);
Always fails with the same error: ENOTTY ("Inappropriate ioctl for device").
I've seen similar questions, e.g.:
Linux kernel module - IOCTL usage returns ENOTTY
Linux Kernel Module/IOCTL: inappropriate ioctl for device
but their solutions didn't work for me.
ENOTTY is issued by the kernel when your device driver has not registered an ioctl function to be called. I'm afraid your function is not well registered, probably because you have registered it in the .unlocked_ioctl field of the struct file_operations structure.
You'll probably get a different result if you register it in the locked version of that field. The most probable cause is that the inode is locked for the ioctl call (as it should be, to avoid race conditions with simultaneous read or write operations on the same device).
Sorry, I have no access to the Linux source tree to check the proper name of the field to use, but you'll surely be able to find it yourself.
NOTE
I notice that you have used the macro _IOW with the major number as the unique identifier. This is probably not what you want. The first parameter of _IOW tries to ensure that ioctl calls get unique identifiers. There's no general way to acquire such identifiers, as this is an interface contract you create between application code and kernel code. Using the major number is bad practice, for two reasons:
Several devices (in Linux, at least) can share the same major number (minor allocation in the Linux kernel allows this), making a clash between devices' ioctls possible.
If you change the major number (say, you configure a kernel where that number is already allocated), you have to recompile all your user-level software to cope with the new device ioctl ids (all of them change if you do this).
_IOW is a macro built long ago (almost at the birth of the Linux kernel) that tried to solve this problem by letting you select a different character for each driver (one not dependent on other kernel parameters, for the reasons pointed out above), so that one device's ioctl calls don't clash with another device driver's. The probability of such a clash is low, but when it happens you can end up in an incorrect machine state (you have issued a valid, working ioctl call to the wrong device).
Ancient Unix (and early Linux) kernels used different characters to build these calls: for example, the tty driver used 'T' as the parameter for the _IO* macros, SCSI disks used 'S', etc.
I suggest you select a random character (one not appearing elsewhere in the Linux kernel listings) and use it in all your devices (you will probably write fewer drivers than there are in the kernel), selecting a different ioctl id for each ioctl call. Maintaining a local file of ioctls registered this way is far better than trying to guess a value that always works.
Also, a look at the definition of the _IO* macros should be very illustrative :)
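For illustration, such a header could look like the sketch below; the magic character 'k' and the names are arbitrary choices for the example, not values registered anywhere:

/* my_dev.h: shared between the kernel module and userspace */
#include <linux/ioctl.h>

#define MY_DEV_MAGIC 'k'   /* driver-private magic character */

#define IOCTL_MY_DEV1 _IOW(MY_DEV_MAGIC, 0, unsigned long)
#define IOCTL_MY_DEV2 _IOW(MY_DEV_MAGIC, 1, unsigned long)
#define IOCTL_MY_DEV3 _IOW(MY_DEV_MAGIC, 2, unsigned long)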

What is the precise definition of the structure passed to the STAT system call?

Where can I find the precise definition of the structure that the STAT family of system calls expects?
Note that I'm referring to the raw system calls one would invoke in assembly (system calls number 4, 5, and 6 on x86_64), not the wrappers normally provided by libc.
The man page stat(2), and what I could manage to dig out of both the Linux and glibc source code, gave confusing and contradictory results (different structure field ordering, extra fields, padding).
I'm certain I'm at fault for not looking where I should, but I can't seem to find the information I'm after, which led me to post this question.
Clarification: what I seek is the exact definition of the stat structure as returned by the system call on any given architecture. I am aware that I can determine this information experimentally, but experimentation is limited to my specific architecture. Furthermore, I expect something as crucial as a data structure used in Linux's ABI to be documented somewhere. I want to know where.
The question linked as a duplicate does not have the information requested in this post. Please unmark this post as a duplicate.
The struct stat definition strictly depends on your architecture. E.g. for x86_64 you can find it in arch/x86/include/uapi/asm/stat.h.
In userspace you can find the same structure in the /usr/include/asm/stat.h file.
Here is the definition for x86_64:
struct stat {
    __kernel_ulong_t st_dev;
    __kernel_ulong_t st_ino;
    __kernel_ulong_t st_nlink;

    unsigned int     st_mode;
    unsigned int     st_uid;
    unsigned int     st_gid;
    unsigned int     __pad0;
    __kernel_ulong_t st_rdev;
    __kernel_long_t  st_size;
    __kernel_long_t  st_blksize;
    __kernel_long_t  st_blocks;  /* Number 512-byte blocks allocated. */

    __kernel_ulong_t st_atime;
    __kernel_ulong_t st_atime_nsec;
    __kernel_ulong_t st_mtime;
    __kernel_ulong_t st_mtime_nsec;
    __kernel_ulong_t st_ctime;
    __kernel_ulong_t st_ctime_nsec;
    __kernel_long_t  __unused[3];
};
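If you want to see that layout in action without the libc wrapper, you can invoke the raw system call yourself. A minimal sketch for x86_64 (the path is just an example; this assumes glibc's struct stat matches the kernel layout, which holds for the 64-bit ABI):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/stat.h>

int main()
{
    struct stat st;

    /* raw stat(2), system call number 4 on x86_64 */
    if (syscall(SYS_stat, "/etc/hostname", &st) == -1) {
        perror("stat");
        return 1;
    }
    printf("inode: %llu, size: %lld\n",
           (unsigned long long)st.st_ino, (long long)st.st_size);
    return 0;
}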

In general, on ucLinux, is ioctl faster than writing to /sys filesystem?

I have an embedded system I'm working with, and it currently uses sysfs to control certain features.
However, there is a function we would like to speed up, if possible.
I discovered that this subsystem also supports an ioctl interface, but before rewriting the code, I decided to find out which interface is faster in general (on ucLinux): sysfs or ioctl.
Does anybody understand both implementations well enough to give me a rough idea of the difference in overhead of each? I'm looking for generic info, such as "ioctl is faster because you've removed the file layer from the function calls", or "they are roughly the same because sysfs has a very simple interface".
Update 10/24/2013:
The specific case I'm currently doing is as follows:
int fd = open("/sys/power/state", O_WRONLY);
write(fd, "standby", 7);
close(fd);
In kernel/power/main.c, the code that handles this write looks like:
static ssize_t state_store(struct kobject *kobj, struct kobj_attribute *attr,
                           const char *buf, size_t n)
{
#ifdef CONFIG_SUSPEND
    suspend_state_t state = PM_SUSPEND_STANDBY;
    const char * const *s;
#endif
    char *p;
    int len;
    int error = -EINVAL;

    p = memchr(buf, '\n', n);
    len = p ? p - buf : n;

    /* First, check if we are requested to hibernate */
    if (len == 7 && !strncmp(buf, "standby", len)) {
        error = enter_standby();
        goto Exit;
    ((( snip )))
Can this be sped up by moving to a custom ioctl(), where the code handling the ioctl call looks something like this:
case SNAPSHOT_STANDBY:
    if (!data->frozen) {
        error = -EPERM;
        break;
    }
    error = enter_standby();
    break;
(so the ioctl() calls the same low-level function that the sysfs function did).
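The userspace side would then be something along these lines (the /dev/snapshot path is an assumption based on the SNAPSHOT_* naming, and the freeze handshake implied by data->frozen is omitted):

int fd = open("/dev/snapshot", O_RDWR);
ioctl(fd, SNAPSHOT_STANDBY, 0);
close(fd);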
If by sysfs you mean the sysfs() library call, notice this in man 2 sysfs:
NOTES
This System-V derived system call is obsolete; don't use it. On systems with /proc, the same information can be obtained via /proc/filesystems; use that interface instead.
I can't recall noticing anything that had both an ioctl() and a sysfs interface, but they probably exist. I'd use the proc or sys handle anyway, since that tends to be less cryptic and more flexible.
If by sysfs you mean accessing files in /sys, that's the preferred method.
I'm looking for generic info, such as "ioctl is faster because you've removed the file layer from the function calls".
Accessing procfs or sysfs files does not entail an I/O bottleneck because they are not real files; they are kernel interfaces. So no, accessing this stuff through "the file layer" does not affect performance. This is a not-uncommon misconception in Linux systems programming, I think. Programmers can be squeamish about system calls that aren't, well, system calls, and paranoid that opening a file will somehow be slower. Of course, file I/O in the ABI is just system calls anyway; what makes a normal (disk) file read slow is not the calls to open, read, write, and so on, it's the hardware bottleneck.
I always use the low-level descriptor-based functions (open(), read()) instead of high-level streams when doing this, because at some point some experience led me to believe they were more reliable for this specifically (reading from /proc). I can't say whether that's definitively true.
The question was interesting, so I built a couple of modules, one for ioctl and one for sysfs; the ioctl implements only a 4-byte copy_from_user and nothing more, and the sysfs write interface does nothing at all.
Then a couple of userspace tests, each with up to 1 million iterations; here are the results:
time ./sysfs /sys/kernel/kobject_example/bar
real 0m0.427s
user 0m0.056s
sys 0m0.368s
time ./ioctl /run/temp
real 0m0.236s
user 0m0.060s
sys 0m0.172s
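For context, each test boils down to a loop like the sketch below (the sysfs variant is shown; the 4-byte payload and iteration count match the description above, the rest is an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    unsigned int val = 0;   /* the 4-byte payload */
    int i;

    int fd = open(argv[1], O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (i = 0; i < 1000000; i++)
        write(fd, &val, sizeof(val));
    close(fd);
    return 0;
}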
edit
I agree with @goldilocks' answer: hardware is the real bottleneck, and in a Linux environment with a well-written driver, choosing between ioctl and sysfs doesn't make a big difference. But on uClinux, on your hardware, even a few CPU cycles can make a difference.
The test I did is for Linux, not uClinux, and it was never meant to be an absolute reference for profiling the two interfaces. My point is that you can write a book about how fast one or the other is, but only testing will tell you; it took me only a few minutes to set this up.

Using <linux/types.h> in user programs, or <stdint.h> in driver module code...does it matter?

I'm developing a device driver module and associated user libraries to handle the ioctl() calls. The library takes the pertinent info and puts it into a struct, which gets passed into the driver module, unpacked there, and then dealt with (I'm omitting a lot of steps, but that's the overall idea).
Some of the data being passed through the struct via ioctl() is of type uint32_t. I've discovered that that type is defined in both <stdint.h> and <linux/types.h>. So far I've been using <linux/types.h> to define that value, including down in the user libraries. But I understand it is bad form to use <linux/*.h> headers in user space, so if I remove those and use <stdint.h> instead, then when my driver module includes the struct definition, it will have to include <stdint.h> as well.
It seems to me that the point of <linux/types.h> is to define types in kernel files, so I'm not sure whether that makes using <stdint.h> there a bad idea. I also found that when trying to compile my driver module with <stdint.h>, I get compilation errors about redefinitions that won't go away, even if I replace all instances of <linux/types.h> with <stdint.h> (and put it at the top of the include order).
Is it a bad idea to use linux/*.h includes in user-space code?
Is it a bad idea to use <stdint.h> in kernel-space code?
If the answers to both of those are yes, then how do I handle the situation where a structure containing a uint32_t is shared by both the user library and the driver module?
Is it a bad idea to use linux/*.h includes in user-space code?
Yes, usually. The typical situation is that you should be using the C-library headers (in this case, stdint.h and friends), interface with the C library through those user-space types, and let the library handle talking to the kernel with kernel types.
You're not in a typical situation, though: in your case, you're writing the driver library. So you should present an interface to userspace using stdint.h, but use the linux/*.h headers when you interface with your kernel driver.
So the answer is no, in your case.
Is it a bad idea to use stdint.h in kernel-space code?
Most definitely yes.
See also: http://lwn.net/Articles/113349/
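A common way to keep a single shared header working on both sides is to switch on __KERNEL__; a sketch (the typedef and struct names are made up for illustration):

/* my_dev_ioctl.h: shared between the driver and the user library */
#ifdef __KERNEL__
#include <linux/types.h>
typedef __u32 my_dev_u32;
#else
#include <stdint.h>
typedef uint32_t my_dev_u32;
#endif

struct my_dev_args {
    my_dev_u32 flags;
    my_dev_u32 value;
};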
Fixed length integers in the Linux kernel
The Linux kernel already has fixed-length integer types which might interest you. In v4.9 they live under include/asm-generic/int-ll64.h:
typedef signed char s8;
typedef unsigned char u8;
typedef signed short s16;
typedef unsigned short u16;
typedef signed int s32;
typedef unsigned int u32;
typedef signed long long s64;
typedef unsigned long long u64;
LDD3 has a chapter about data sizes as well: https://static.lwn.net/images/pdf/LDD3/ch11.pdf
LDD3 mentions there that the best printk strategy is to just cast to the largest possible integer type with the correct signedness and use %lld or %llu. %ju appears to be unavailable in the printk formatting centerpiece, lib/vsprintf.c.
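For instance, a minimal illustration of that casting strategy (the variable is hypothetical):

u64 nblocks = 512;   /* some 64-bit quantity */
printk(KERN_INFO "allocated blocks: %llu\n",
       (unsigned long long)nblocks);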

copy_to_user and copy_from_user with structs

I have a simple question: when I have to copy a structure's contents from userspace to kernel space (or vice versa), for example with an ioctl call, the code looks like this (for simplicity, without error checking):
typedef struct my_struct {
    int a;
    char b;
} my_struct;
Userspace:
my_struct s;
s.a = 11;
s.b = 'X';
ioctl(fd, MY_CMD, &s);
Kernelspace:
int my_ioctl(struct inode *inode, struct file *filp, unsigned int cmd,
             unsigned long arg)
{
    ...
    my_struct ks;

    copy_from_user(&ks, (void __user *)arg, sizeof(ks));
    ...
}
I think that the size of the structure in userspace (variable s) and in kernel space (variable ks) might not be the same (without specifying __attribute__((packed))). So is it right to specify the number of bytes in copy_from_user with the sizeof macro? I see that some structures in the kernel sources are not declared packed, so how is it ensured that the size will be the same in userspace and kernel space?
Thank you all!
Why should the layout of a struct be different in kernel space than in user space? There is no reason for the compiler to lay out the data differently.
The exception is a 32-bit userspace program running on a 64-bit kernel. See http://www.x86-64.org/pipermail/discuss/2002-June/002614.html for a tutorial on how to deal with this.
The userspace structure should come from a kernel header, so the struct definition should be the same in user and kernel space. Do you have a real example?
Of course, if you play with different packing options on the two sides of an ABI, whatever it is, you are in trouble. The problem here is not sizeof.
If your question is "do packing options affect the binary interface?", the answer is yes.
If your question is "how can I solve a packing mismatch?", please provide more information.
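As an illustration of keeping the two sides in agreement without packing pragmas, a shared header can use fixed-width types and explicit padding; a sketch (the names are made up):

#include <linux/types.h>

struct my_ioctl_args {
    __u32 a;         /* fixed width on every architecture */
    __u8  b;
    __u8  __pad[3];  /* explicit padding, no compiler surprises */
};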
