When and which function is used for modifying a file on sysfs in Linux?

I'm analyzing the block layer's sysfs functions.
I added (attached) a diagram I made to explain the function sequence flow of
/usr/src/linux-source-4.8.0/linux-source/4.8.0/block/blk-mq-sysfs.c.
I understood the relationships between these functions, but I couldn't find how the kernel changes the values of the attribute files.
I heard that these files are created in the sysfs hierarchy by calling the sysfs_create_group() function.
When I issue some I/O requests, the system creates files like the ones below
(I use an Intel NVMe SSD 750 series):
root@leedoosol:/sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/nvme/nvme0/nvme0n1/mq/0/cpu0# ls
completed dispatched merged rq_list
The kernel would have created these files to give us information about the number of completed requests, the number dispatched, the number merged, and the pending request list.
The kernel must also be updating the values of these files while handling I/O requests, but I don't know when and how it changes them.
I want to know when and how the kernel changes the values of these attribute files, because I have to find out what these values mean exactly.
Here is my environment:
1.) 2 sockets, 10 cores per socket
2.) Kernel version: 4.8.17
3.) Intel SSD 750 series

Maybe I found the answer: the show/store functions are called when I read or write my attribute file.
The kernel doesn't keep the value stored in the attribute file up to date; it doesn't need to.
When I use 'cat' on the attribute file (in my example, 'dispatched'), the file is opened and several structs associated with that file are created in RAM (of course, in the case of sysfs, no backing store exists).
read() is called, and then the show() function is called to produce the value on demand.
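To make that concrete, here is a minimal sketch of the mechanism, assuming a kobject that already exists; the names (my_ctx, dispatched_show, register_ctx_sysfs) are illustrative and are not the actual blk-mq-sysfs.c symbols:

/* Minimal sketch: a sysfs attribute whose value is computed only at read
 * time, plus its registration with sysfs_create_group(). */
#include <linux/kobject.h>
#include <linux/sysfs.h>

struct my_ctx {
        struct kobject kobj;
        unsigned long dispatched;       /* bumped in the I/O path, lives only in RAM */
};

/* Called every time userspace read()s the "dispatched" file; the value is
 * formatted on demand, nothing is ever "written into" the sysfs file. */
static ssize_t dispatched_show(struct kobject *kobj,
                               struct kobj_attribute *attr, char *page)
{
        struct my_ctx *ctx = container_of(kobj, struct my_ctx, kobj);

        return sprintf(page, "%lu\n", ctx->dispatched);
}

static struct kobj_attribute dispatched_attr = __ATTR_RO(dispatched);

static struct attribute *ctx_attrs[] = {
        &dispatched_attr.attr,
        NULL,                           /* attribute list must be NULL-terminated */
};

static const struct attribute_group ctx_group = {
        .attrs = ctx_attrs,
};

/* Creates the files under the kobject's directory, as mentioned in the
 * question about sysfs_create_group(). */
int register_ctx_sysfs(struct kobject *kobj)
{
        return sysfs_create_group(kobj, &ctx_group);
}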

Related

Convert Open VMS FDL (File Definition Language) to linux

I am working on a project where we are migrating from Open VMS to Unix/Linux.
There's a functionality called "FDL" in OpenVMS which I want to achieve in Unix.
What FDL actually does is define a certain set of attributes for a file or a record, like fixing a block size for a particular file, setting the file organization as sequential, variable, or relative, specifying the record size in a file beforehand, specifying the carriage return (escape sequence) for records, etc.
How can I set these attributes before a file gets created in Unix?
FDL is merely a syntax/descriptive method to set/view OpenVMS file attributes (metadata), which has no equivalent in typical Linux file systems. Those attributes are implemented by the (Files-11 / ODS) file system and acted on by RMS (the OpenVMS Record Management Services), for which again there is no equivalent in Linux, although there are packages (Sector7).
So, much more than an FDL question, this is an RMS question.
RMS offers 'record' access, where a record is a blob of bytes defined in the file which can be read sequentially, by number, or by key (indexed file). The attributes mentioned in the question are to do with simple sequential access, but there Linux just offers a byte-stream method. The application is supposed to know how much to read and when to stop reading. Possibly a record terminator, frequently a linefeed, is used, but that's about it (fscanf).
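As a rough illustration of what that means in practice, here is a minimal sketch of an application imposing its own fixed-length record structure on a plain Linux byte stream; the record size and file name are assumptions the application has to make for itself:

#include <stdio.h>

#define RECORD_SIZE 80          /* assumption: the application fixes this itself */

int main(void)
{
        char record[RECORD_SIZE];
        FILE *f = fopen("data.bin", "rb");      /* hypothetical input file */

        if (!f)
                return 1;

        /* The file system knows nothing about records; fread() simply hands
         * back the next RECORD_SIZE bytes until the stream runs out. */
        while (fread(record, RECORD_SIZE, 1, f) == 1) {
                /* process one fixed-length record here */
        }

        fclose(f);
        return 0;
}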
Other than using a 'parallel' metadata file, or reserving an initial chunk of the byte stream in your files, there is no standard way to store metadata describing how to use the byte stream in the file, and doing either makes the files hard to use by other applications.
All this to say: No Can Do.
Sorry.

DMIDecode product_uuid and product_serial. What is the difference?

There are product_uuid and product_serial files in the directory /sys/class/dmi/id/.
How are they generated? What is the difference?
Can I change these files?
Are the values preserved after reinstalling the operating system?
How are they generated?
Those values are generated in kernel code. You can find them pretty easily using the git grep command (with the keywords you are interested in) in your kernel source directory:
$ git grep --all-match -n -e '\bdmi\b' -e product_uuid -e product_serial
So, product_uuid and product_serial sysfs nodes are created in drivers/firmware/dmi-id.c:
DEFINE_DMI_ATTR_WITH_SHOW(product_serial, 0400, DMI_PRODUCT_SERIAL);
DEFINE_DMI_ATTR_WITH_SHOW(product_uuid, 0400, DMI_PRODUCT_UUID);
From the DEFINE_DMI_ATTR_WITH_SHOW definition you can see that both attributes are accessed via the sys_dmi_field_show() function, which in turn calls dmi_get_system_info(), which just returns the corresponding element from the dmi_ident array. This table is populated in the dmi_decode() routine:
dmi_save_ident(dm, DMI_PRODUCT_SERIAL, 7);
dmi_save_uuid(dm, DMI_PRODUCT_UUID, 8);
So product_uuid is generated in the dmi_save_uuid() function. Just read its code to understand how it's done.
product_serial is generated in the dmi_save_ident() function. It boils down to code like this:
(struct dmi_header *)(dmi_base)[7];
where dmi_base is the address (remapped to virtual memory, obviously) of the DMI table, and 7 corresponds to the DMI_PRODUCT_SERIAL constant.
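Putting the read path described above together, it boils down to something like the following (a heavily simplified sketch, paraphrased from drivers/firmware/dmi-id.c and drivers/firmware/dmi_scan.c, not verbatim kernel code):

/* dmi_ident is filled once at boot by dmi_decode()/dmi_save_ident(). */
static const char *dmi_ident[DMI_STRING_MAX];

/* dmi_get_system_info() just hands back the string saved at boot. */
const char *dmi_get_system_info(int field)
{
        return dmi_ident[field];
}

/* The show callback generated by DEFINE_DMI_ATTR_WITH_SHOW() for
 * product_serial ends up doing, roughly, this: */
static ssize_t product_serial_show(struct device *dev,
                                   struct device_attribute *attr, char *page)
{
        return sprintf(page, "%s\n", dmi_get_system_info(DMI_PRODUCT_SERIAL));
}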
To better understand this, please see the SMBIOS specification, specifically Table 9 – System Information (Type 1) Structure, which corresponds to this command:
# dmidecode --type 1
What is the difference?
As for product_uuid -- look at the SMBIOS specification, section 7.2.1 System - UUID. It has a description and also a table explaining each part of this number. Using that table you can decode your UUID and extract some information from it, like the timestamp, etc.
As for product_serial -- I believe it's self-explanatory: it's just a serial number for your device. You can usually find it printed on a sticker on your computer; for example, for my laptop it's on the bottom. It's the same string that I see in /sys/class/dmi/id/product_serial.
Can I change these files?
Those files are actually not real files but just an interface to kernel functions. Read about sysfs for details. So in order to "change" those files you would need to edit the mentioned kernel files accordingly, then rebuild the whole kernel and boot it (instead of the one provided by your distribution).
Also, as @ChristopheVu-Brugier mentioned in a comment, you can change those values in the DMI table itself (in some tricky way, though). But I wouldn't recommend it. Those values definitely have some meaning and may be useful in some cases (if not for you, then for some software on your PC).
Are the values preserved after reinstalling the operating system?
Those values are obtained from the DMI table, which is stored along with the BIOS in permanent memory (the flash chip holding the BIOS on your motherboard); reading those sysfs files simply reads the values out of that DMI table via kernel functions, so reinstalling the operating system does not affect them.

list_empty returns 1 (non zero) if the list is not empty

I am working on Linux device driver code. I can't reveal what exactly this code is used for; I will try my best to explain my situation. The code below is executed in interrupt context when we receive a USB interrupt indicating that there is some data from USB. The data arrives as URBs, and we convert them into chunks and add them to a Linux circular doubly linked list for further processing.
spin_lock(&demod->demod_lock);
/* Delete node from free list */
list_del(&chunk->list_head);
/* Add it to full list */
list_add_tail(&chunk->list_head,&demod->ChunkRefList.full);
/* Update chunk status */
chunk->status=CHUNKDESC_FULL_LIST;
/* Increment the chunks assigned to application */
demod->stats.assigned.c++;
spin_unlock(&demod->demod_lock);
printk("Value returned by list_empty is %d ",list_empty(&demod->ChunkRefList.full));
I repeat, this code gets executed in interrupt context. I am using this code as part of an embedded system which plays audio and video (AV) continuously.
After approximately 12 hours, the AV freezes, and when I analyse it, I see that the value returned by list_empty(&demod->ChunkRefList.full) is always 0x1. Normally, when AV is playing fine, its value is 0x0, indicating that the list is not empty.
As you can see, when the above code executes, it first adds a node to the full list and then checks whether the full list is empty. Ideally, the value should always be 0x0. Why is its value 0x1?
I am monitoring the value of list_empty(&demod->ChunkRefList.full) using the timedoctor application, which doesn't introduce any overhead in interrupt context. I used printk above just to show that I am printing its value.
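For reference, list_empty() in include/linux/list.h essentially checks whether the head points back to itself, so it returns 1 only when the list contains no entries (paraphrased; the exact definition varies slightly between kernel versions):

/* Paraphrased from include/linux/list.h. */
static inline int list_empty(const struct list_head *head)
{
        return head->next == head;      /* 1 == empty, 0 == has at least one node */
}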
Finally, I was able to figure out the issue.
Since the code runs in interrupt context, using list_del followed by list_add_tail was causing the issue.
So I removed these two macros and used the list_move_tail macro instead, which resolved the issue.
With list_move_tail, there is no need for us to delete the node and then add it to the new list ourselves; the kernel takes care of both operations, as sketched below.
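A minimal sketch of the reworked critical section, reusing the struct and field names from the snippet in the question (which are assumptions on my side):

spin_lock(&demod->demod_lock);

/* Move the chunk from the free list to the tail of the full list in one
 * step; list_move_tail() performs the list_del() + list_add_tail() pair
 * internally. */
list_move_tail(&chunk->list_head, &demod->ChunkRefList.full);

/* Update chunk status and statistics exactly as before. */
chunk->status = CHUNKDESC_FULL_LIST;
demod->stats.assigned.c++;

spin_unlock(&demod->demod_lock);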
So, to conclude, it's better to keep Linux linked list manipulations as minimal as possible in interrupt context.

How can I read the routing table information from Kernel space?

I would like to be able to read the routing table from kernel space.
In user space, this information is readable in /proc/net/route, but I don't know how to read the same information from kernel space.
I don't want to modify it, only read it.
Any ideas?
To fetch the routing table, you would need to send a message of type RTM_GETROUTE to the kernel using a socket of the AF_NETLINK family — this is the rtnetlink(7) interface.
For convenience, rather than sending messages over a socket, you can use the libnetlink(3) library, and call int rtnl_wilddump_request(struct rtnl_handle *rth, int family, RTM_GETROUTE).
For an even simpler cross-platform abstraction, you could use the libdnet library, which has a function int route_get(route_t *r, struct route_entry *entry).
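For illustration of the message format involved, here is a minimal user-space sketch of such an RTM_GETROUTE dump request over an AF_NETLINK socket; error handling and route attribute parsing are omitted, and it is an outline rather than a drop-in solution:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
        struct {
                struct nlmsghdr nlh;
                struct rtmsg rtm;
        } req;
        char buf[8192];
        int len;

        /* Ask for a dump of all IPv4 routes. */
        memset(&req, 0, sizeof(req));
        req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtmsg));
        req.nlh.nlmsg_type = RTM_GETROUTE;
        req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
        req.rtm.rtm_family = AF_INET;
        send(fd, &req, req.nlh.nlmsg_len, 0);

        /* Each recv() returns one or more netlink messages; iterate until the
         * NLMSG_DONE message terminates the dump. */
        while ((len = recv(fd, buf, sizeof(buf), 0)) > 0) {
                struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

                for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
                        if (nlh->nlmsg_type == NLMSG_DONE) {
                                close(fd);
                                return 0;
                        }
                        /* NLMSG_DATA(nlh) is a struct rtmsg followed by
                         * rtattrs describing one route. */
                }
        }

        close(fd);
        return 0;
}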
You may find out where in the kernel source tree this file is created in the '/proc' pseudo filesystem by searching for the "route" keyword or for a "create_proc_..." call. Take a look at how such files are created for your kernel in the source.
I suspect it's located somewhere in net/ipv4/route.c.

File in which the data structures for the Global Descriptor Table and Local Descriptor Table are defined?

I am reading Understanding the Linux Kernel, and in it I read about the Global Descriptor Table and the Local Descriptor Table.
In which source file(s) of the Linux kernel are the data structures for the GDT and LDT defined?
A Google search for the term "Linux Kernel file gdt" yields the exact results that you are looking for. This is the link to the search result of the book with the contents describing where the GDT and LDT are defined.
All GDTs are stored in the cpu_gdt_table array.
If you look in the source code index, you can see that these symbols are defined in the file arch/i386/kernel/head.S. However, I think the source code index can only be viewed when you have a copy of the book. Nevertheless, the file where the GDT is defined is given.
For the latest kernel, the GDT seems to be defined in at least three separate files:
arch/x86/include/asm/desc_defs.h
arch/x86/include/asm/desc.h
arch/x86/include/asm/segment.h
The layout of the main GDT seems to be defined in arch/x86/include/asm/segment.h around line 91. There are comments about the layout above this line.
The completed table is loaded in arch/x86/include/asm/desc.h with the function static inline void native_load_gdt(const struct desc_ptr *dtr), which just executes the lgdt assembly instruction. This is consistent with the way older kernels load the table into the processor. See line 303 here. However, I cannot find any calls to this function in the code base. Someone please help figure this out.
Also, I cannot find the equivalent of the definition of the actual table's constants (as at line 479) in newer kernels.
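For reference, that helper is essentially a thin wrapper around lgdt (paraphrased, not copied verbatim from the tree):

/* Paraphrased from arch/x86/include/asm/desc.h: load the GDT register with
 * the base address and limit held in *dtr. */
static inline void native_load_gdt(const struct desc_ptr *dtr)
{
        asm volatile("lgdt %0" : : "m" (*dtr));
}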
