I used the following commands to trace the kernel:
$ trace-cmd record -p function_graph ls
$ trace-cmd report
but the report shows only raw addresses instead of function names:
MtpServer-4877 [000] 1706.014074: funcgraph_exit: + 23.875 us | }
trace-cmd-4892 [001] 1706.014075: funcgraph_entry: 2.000 us | ffff0000082cdc24();
trace-cmd-4895 [002] 1706.014076: funcgraph_entry: 1.250 us | ffff00000829e704();
MtpServer-4877 [000] 1706.014076: funcgraph_entry: 1.375 us | ffff0000083266bc();
kswapd0-1024 [003] 1706.014078: funcgraph_entry: | ffff00000827956c() {
kswapd0-1024 [003] 1706.014081: funcgraph_entry: | ffff00000827801c() {
trace-cmd-4895 [002] 1706.014081: funcgraph_entry: 1.375 us | ffff0000082bd8b4();
MtpServer-4877 [000] 1706.014082: funcgraph_entry: 1.375 us | ffff0000082ccefc();
trace-cmd-4892 [001] 1706.014082: funcgraph_entry: | ffff0000082c5adc() {
kswapd0-1024 [003] 1706.014084: funcgraph_entry: 1.500 us | ffff00000828c8f0();
trace-cmd-4892 [001] 1706.014085: funcgraph_entry: 1.250 us | ffff0000082c5a58();
MtpServer-4877 [000] 1706.014088: funcgraph_entry: 1.125 us | ffff0000082e3a30();
trace-cmd-4895 [002] 1706.014089: funcgraph_exit: + 19.125 us | }
kswapd0-1024 [003] 1706.014090: funcgraph_entry: 1.500 us | ffff0000090b6c04();
trace-cmd-4895 [002] 1706.014090: funcgraph_entry: | ffff0000082d4ffc() {
trace-cmd-4892 [001] 1706.014092: funcgraph_exit: 6.875 us | }
trace-cmd-4895 [002] 1706.014093: funcgraph_entry: 1.000 us | ffff0000090b3a40();
How can I make trace-cmd show the actual function names in its output?
I ran into this issue, and for me it was caused by /proc/kallsyms (a file that maps kernel addresses to symbol names) displaying all zeros for kernel addresses.
Since this is a procfs "file", its behavior can, and does, vary depending on the context you are accessing it from.
In this case, it is due to security checks.
If you pass the checks, the addresses will be nonzero and look similar to this:
000000000000c000 A exception_stacks
0000000000014000 A entry_stack_storage
0000000000015000 A espfix_waddr
0000000000015008 A espfix_stack
If you fail the checks they will be zero, like this:
(In my case I did not have the CAP_SYSLOG capability because I was running in a container and systemd-nspawn's default behaviour was dropping the capability. [1])
0000000000000000 A exception_stacks
0000000000000000 A entry_stack_storage
0000000000000000 A espfix_waddr
0000000000000000 A espfix_stack
This is influenced by kernel settings; more information (along with the rather simple checking code) can be found in the answers to Allow single user to access /proc/kallsyms.
To fix this issue you need to start trace-cmd with the proper permissions/capabilities.
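For example, since in my case the capability was dropped by systemd-nspawn, its --capability flag can be used to retain it. A minimal sketch (the container path is a placeholder; adapt to however you launch your container):

# systemd-nspawn -D /path/to/container --capability=CAP_SYSLOG
# grep -m1 ' _text$' /proc/kallsyms    # inside the container; nonzero once the check passes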
[1] This is what the capability settings look like for an nspawned process with no modifications:
# cat /proc/2894/status | grep -i cap
CapInh: 0000000000000000
CapPrm: 00000000fdecafff
CapEff: 00000000fdecafff
CapBnd: 00000000fdecafff
CapAmb: 0000000000000000
This is what it looks like for a process started by root in a "normal" context:
# cat /proc/616428/status | grep -i cap
CapInh: 0000000000000000
CapPrm: 000001ffffffffff
CapEff: 000001ffffffffff
CapBnd: 000001ffffffffff
CapAmb: 0000000000000000
Alternative solution
This is speculation on my part, but I expect this kind of censoring in the kallsyms file exists to prevent defeating kASLR by simply reading the addresses of kernel memory locations out of the file.
It seems that if you use the tracefs ftrace interface directly, instead of recording raw trace events and resolving symbols afterwards (I don't know if there is a way to make trace-cmd do the former), the kernel resolves the symbol names for you. Since that path never exposes kernel addresses, the issue does not arise.
For example (note how it says !cap_syslog):
# capsh --print
Current: =ep cap_syslog-ep
Bounding set = [...elided for brevity ...]
Ambient set =
Current IAB: !cap_syslog
Securebits: 00/0x0/1'b0 (no-new-privs=0)
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
secure-no-ambient-raise: no (unlocked)
uid=0(root) euid=0(root)
gid=0(root)
groups=0(root)
Guessed mode: HYBRID (4)
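(The following commands assume the current directory is the tracefs mount point, typically /sys/kernel/tracing:)
# cd /sys/kernel/tracing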
# echo 1 > events/enable
# echo function_graph > current_tracer
# echo 1 > tracing_on
# head trace
# tracer: function_graph
#
# CPU DURATION FUNCTION CALLS
# | | | | | | |
2) 0.276 us | } /* fpregs_assert_state_consistent */
1) 2.052 us | } /* exit_to_user_mode_prepare */
1) | syscall_trace_enter.constprop.0() {
1) | __traceiter_sys_enter() {
1) | /* sys_futex(uaddr: 7fc3565b9378, op: 80, val: 0, utime: 0, uaddr2: 0, val3: 0) */
1) | /* sys_enter: NR 202 (7fc3565b9378, 80, 0, 0, 0, 0) */
Related
It's well-documented how to use ftrace to find the function graph starting from a certain function, e.g.
# echo nop > current_tracer
# echo 100 > max_graph_depth
# echo ksys_dup3 > set_graph_function
# echo function_graph > current_tracer
# cat trace
# tracer: function_graph
#
# CPU DURATION FUNCTION CALLS
# | | | | | | |
7) | ksys_dup3() {
7) 0.533 us | expand_files();
7) | do_dup2() {
7) | filp_close() {
7) 0.405 us | dnotify_flush();
7) 0.459 us | locks_remove_posix();
7) | fput() {
7) | fput_many() {
7) | task_work_add() {
7) 0.533 us | kick_process();
7) 1.558 us | }
7) 2.475 us | }
7) 3.382 us | }
7) 6.122 us | }
7) 7.104 us | }
7) + 10.763 us | }
But this only returns the function graph starting at ksys_dup3. It omits the full function graph that leads to ksys_dup3:
7) | el0_svc_handler() {
7) | el0_svc_common() {
7) | __arm64_sys_dup3() {
7) | ksys_dup3() {
7) 0.416 us | expand_files();
7) | do_dup2() {
7) | filp_close() {
7) 0.405 us | dnotify_flush();
7) 0.406 us | locks_remove_posix();
7) | fput() {
7) 0.416 us | fput_many();
7) 1.269 us | }
7) 3.819 us | }
7) 4.746 us | }
7) 6.475 us | }
7) 7.381 us | }
7) 8.362 us | }
7) 9.205 us | }
Is there a way to use ftrace to filter a full function graph?
I'd say all of ftrace is well documented (here). There is no way to do what you want though, because the filtering ability of ftrace is implemented through triggers that can start/stop tracing exactly when an event happens. Setting set_graph_function will trigger a trace start when the function is entered and a trace stop when it's exited. Since when you enter el0_svc_handler you cannot know beforehand if ksys_dup3 is going to be called, there is no way to write a trigger to start the trace on such a condition (that would require being able to "predict the future").
You can, however, do fine-grained filtering with the set_ftrace_pid parameter: write a program that only does what you need and run a full trace on that program, or filter on the parent el0_svc_handler.
In case you don't actually know which parent functions are called before the one you want, you can do a single targeted run with the func_stack_trace option enabled to get an idea of what the entire call chain is, and then use that output to set the appropriate filter for a "normal" run.
For example, let's say I want to trace do_dup2, but as you say I want to start from one of the parent functions. I'll first write a dummy test program like the following one:
#include <stdio.h>   /* printf, getchar */
#include <unistd.h>  /* getpid, dup2 */

int main(void) {
    printf("My PID is %d, press ENTER to go...\n", getpid());
    getchar();
    dup2(0, 1);   /* the call we want to trace */
    return 0;
}
I'll compile and start the above program in one shell:
$ ./test
My PID is 1234, press ENTER to go...
Then in another root shell, configure tracing as follows:
cd /sys/kernel/tracing
echo function > current_tracer
echo 1 > options/func_stack_trace
echo do_dup2 > set_ftrace_filter
echo PID_OF_THE_PROGRAM_HERE > set_ftrace_pid
Note: echo do_dup2 > set_ftrace_filter is very important: otherwise you will trace and dump the stack for every single kernel function, which would be a huge performance hit and could make the system unresponsive. For the same reason, echo PID_OF_THE_PROGRAM_HERE > set_ftrace_pid is also important if you don't want to trace every single dup2 syscall made on the system.
Now I can do one trace:
echo 1 > tracing_on
# ... press ENTER in the other shell ...
echo 0 > tracing_on
cat trace
And the result will be something like this (my machine is x86 so the syscall entry functions have different names):
# tracer: function
#
# entries-in-buffer/entries-written: 2/2 #P:20
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
a.out-6396 [000] .... 1455.370160: do_dup2 <-__x64_sys_dup2
a.out-6396 [000] .... 1455.370174: <stack trace>
=> 0xffffffffc322006a
=> do_dup2
=> __x64_sys_dup2
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe
I can now see the entire call chain that led to do_dup2 (printed in reverse order above) and then do another normal trace from one of the parent functions (remember to disable options/func_stack_trace first).
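As a sketch, that follow-up run could look like this (same tracefs directory as before; __x64_sys_dup2 is one of the parents from the stack trace above, so substitute whichever parent you want to start from):

echo 0 > options/func_stack_trace      # stack traces no longer needed
echo > set_ftrace_filter               # clear the do_dup2 filter
echo __x64_sys_dup2 > set_graph_function
echo function_graph > current_tracer

Then rerun the program and collect the trace exactly as before.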
Story
Case 1
I accidentally wrote my Assembly code in the .data section. I compiled it and executed it. The program ran normally under Linux 5.4.0-53-generic even though I didn't specify a flag like execstack.
Case 2
After that, I executed the program under Linux 5.9.0-050900rc5-generic. The program received a SIGSEGV. I inspected the virtual memory permissions by reading /proc/$pid/maps. It turned out that the section is not executable.
I think there is a configuration on Linux that manages this permission, but I don't know where to find it.
Code
[Linux 5.4.0-53-generic]
Run (normal)
ammarfaizi2@integral:/tmp$ uname -r
5.4.0-53-generic
ammarfaizi2@integral:/tmp$ cat test.asm
[section .data]
global _start
_start:
mov eax, 60
xor edi, edi
syscall
ammarfaizi2@integral:/tmp$ nasm --version
NASM version 2.14.02
ammarfaizi2@integral:/tmp$ nasm -felf64 test.asm -o test.o
ammarfaizi2@integral:/tmp$ ld test.o -o test
ammarfaizi2@integral:/tmp$ ./test
ammarfaizi2@integral:/tmp$ echo $?
0
ammarfaizi2@integral:/tmp$ md5sum test
7ffff5fd44e6ff0a278e881732fba525 test
ammarfaizi2@integral:/tmp$
Check Permission (00400000-00402000 rwxp), so it is executable.
## Debug
gef➤ shell cat /proc/`pgrep test`/maps
00400000-00402000 rwxp 00000000 08:03 7471589 /tmp/test
7ffff7ffb000-7ffff7ffe000 r--p 00000000 00:00 0 [vvar]
7ffff7ffe000-7ffff7fff000 r-xp 00000000 00:00 0 [vdso]
7ffffffde000-7ffffffff000 rwxp 00000000 00:00 0 [stack]
ffffffffff600000-ffffffffff601000 --xp 00000000 00:00 0 [vsyscall]
gef➤
[Linux 5.9.0-050900rc5-generic]
Run (Segfault)
root@esteh:/tmp# uname -r
5.9.0-050900rc5-generic
root@esteh:/tmp# cat test.asm
[section .data]
global _start
_start:
mov eax, 60
xor edi, edi
syscall
root@esteh:/tmp# nasm --version
NASM version 2.14.02
root@esteh:/tmp# nasm -felf64 test.asm -o test.o
root@esteh:/tmp# ld test.o -o test
root@esteh:/tmp# ./test
Segmentation fault (core dumped)
root@esteh:/tmp# echo $?
139
root@esteh:/tmp# md5sum test
7ffff5fd44e6ff0a278e881732fba525 test
root@esteh:/tmp#
Check Permission (00400000-00402000 rw-p), so it is NOT executable.
## Debug
gef➤ shell cat /proc/`pgrep test`/maps
00400000-00402000 rw-p 00000000 fc:01 2412 /tmp/test
7ffff7ff9000-7ffff7ffd000 r--p 00000000 00:00 0 [vvar]
7ffff7ffd000-7ffff7fff000 r-xp 00000000 00:00 0 [vdso]
7ffffffde000-7ffffffff000 rw-p 00000000 00:00 0 [stack]
ffffffffff600000-ffffffffff601000 --xp 00000000 00:00 0 [vsyscall]
gef➤
objdump -p
root@esteh:/tmp# objdump -p test
test: file format elf64-x86-64
Program Header:
LOAD off 0x0000000000000000 vaddr 0x0000000000400000 paddr 0x0000000000400000 align 2**12
filesz 0x0000000000001009 memsz 0x0000000000001009 flags rw-
Questions
Where is the configuration on Linux that manages default ELF sections permission?
Are my observations on permissions correct?
Summary
Default permission for .data section on Linux 5.4.0-53-generic is executable.
Default permission for .data section on Linux 5.9.0-050900rc5-generic is NOT executable.
Your binary is missing PT_GNU_STACK. As such, this change appears to have been caused by commit 9fccc5c0c99f238aa1b0460fccbdb30a887e7036:
From 9fccc5c0c99f238aa1b0460fccbdb30a887e7036 Mon Sep 17 00:00:00 2001
From: Kees Cook <keescook@chromium.org>
Date: Thu, 26 Mar 2020 23:48:17 -0700
Subject: x86/elf: Disable automatic READ_IMPLIES_EXEC on 64-bit
With modern x86 64-bit environments, there should never be a need for
automatic READ_IMPLIES_EXEC, as the architecture is intended to always
be execute-bit aware (as in, the default memory protection should be NX
unless a region explicitly requests to be executable).
There were very old x86_64 systems that lacked the NX bit, but for those,
the NX bit is, obviously, unenforceable, so these changes should have
no impact on them.
Suggested-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Link: https://lkml.kernel.org/r/20200327064820.12602-4-keescook@chromium.org
---
arch/x86/include/asm/elf.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 397a1c74433ec..452beed7892bb 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -287,7 +287,7 @@ extern u32 elf_hwcap2;
* CPU: | lacks NX* | has NX, ia32 | has NX, x86_64 |
* ELF: | | | |
* ---------------------|------------|------------------|----------------|
- * missing PT_GNU_STACK | exec-all | exec-all | exec-all |
+ * missing PT_GNU_STACK | exec-all | exec-all | exec-none |
* PT_GNU_STACK == RWX | exec-stack | exec-stack | exec-stack |
* PT_GNU_STACK == RW | exec-none | exec-none | exec-none |
*
@@ -303,7 +303,7 @@ extern u32 elf_hwcap2;
*
*/
#define elf_read_implies_exec(ex, executable_stack) \
- (executable_stack == EXSTACK_DEFAULT)
+ (mmap_is_ia32() && executable_stack == EXSTACK_DEFAULT)
struct task_struct;
--
This was first present in the 5.8 series. See also Unexpected exec permission from mmap when assembly files included in the project.
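You can check for the segment with readelf; on a binary linked like the one in the question, the following prints nothing because PT_GNU_STACK is absent:

# readelf -lW test | grep GNU_STACK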
This is only a guess: I think the culprit is the READ_IMPLIES_EXEC personality that was being set automatically in the absence of a PT_GNU_STACK segment.
In the 5.4 kernel source we can find this piece of code:
SET_PERSONALITY2(loc->elf_ex, &arch_state);
if (elf_read_implies_exec(loc->elf_ex, executable_stack))
current->personality |= READ_IMPLIES_EXEC;
That's the only thing that can transform an RW section into an RWX one. No other use of PROT_EXEC seemed to have changed or to be relevant to this question, as far as I can tell.
The executable_stack is set here:
for (i = 0; i < loc->elf_ex.e_phnum; i++, elf_ppnt++)
switch (elf_ppnt->p_type) {
case PT_GNU_STACK:
if (elf_ppnt->p_flags & PF_X)
executable_stack = EXSTACK_ENABLE_X;
else
executable_stack = EXSTACK_DISABLE_X;
break;
But if the PT_GNU_STACK segment is not present, that variable retains its default value:
int executable_stack = EXSTACK_DEFAULT;
Now, this workflow is identical in both 5.4 and the latest kernel source; what changed is the definition of elf_read_implies_exec:
Linux 5.4:
/*
* An executable for which elf_read_implies_exec() returns TRUE will
* have the READ_IMPLIES_EXEC personality flag set automatically.
*/
#define elf_read_implies_exec(ex, executable_stack) \
(executable_stack != EXSTACK_DISABLE_X)
Latest Linux:
/*
* An executable for which elf_read_implies_exec() returns TRUE will
* have the READ_IMPLIES_EXEC personality flag set automatically.
*
* The decision process for determining the results are:
*
* CPU: | lacks NX* | has NX, ia32 | has NX, x86_64 |
* ELF: | | | |
* ---------------------|------------|------------------|----------------|
* missing PT_GNU_STACK | exec-all | exec-all | exec-none |
* PT_GNU_STACK == RWX | exec-stack | exec-stack | exec-stack |
* PT_GNU_STACK == RW | exec-none | exec-none | exec-none |
*
* exec-all : all PROT_READ user mappings are executable, except when
* backed by files on a noexec-filesystem.
* exec-none : only PROT_EXEC user mappings are executable.
* exec-stack: only the stack and PROT_EXEC user mappings are executable.
*
* *this column has no architectural effect: NX markings are ignored by
* hardware, but may have behavioral effects when "wants X" collides with
* "cannot be X" constraints in memory permission flags, as in
* https://lkml.kernel.org/r/20190418055759.GA3155@mellanox.com
*
*/
#define elf_read_implies_exec(ex, executable_stack) \
(mmap_is_ia32() && executable_stack == EXSTACK_DEFAULT)
Note how in the 5.4 version elf_read_implies_exec returned true if the stack was not explicitly marked as non-executable (via the PT_GNU_STACK segment).
In the latest source the check is more defensive: elf_read_implies_exec is true only for 32-bit executables, and only when no PT_GNU_STACK segment is found in the ELF binary.
I assembled your program, linked it, and found no PT_GNU_STACK segment, so this may be the reason.
If this is indeed the issue, and if I followed the code correctly, then marking the stack as non-executable in the binary should prevent its data section from being mapped executable (even on Linux 5.4).
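A quick way to test this theory is to relink with an explicitly non-executable stack, which makes GNU ld emit a PT_GNU_STACK segment with RW flags (a sketch, assuming GNU ld and the nasm object file from the question):

$ ld -z noexecstack test.o -o test
$ readelf -lW test | grep GNU_STACK    # should now show an RW GNU_STACK entry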
I want to print the CPU number on which the current process or function is executing, similar to what ftrace shows:
TASK-PID CPU# TIMESTAMP FUNCTION
| | | | |
<idle>-0 [002] 23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
<idle>-0 [002] 23636.756054: activate_task <-ttwu_do_activate.constprop.89
<idle>-0 [002] 23636.756055: enqueue_task <-activate_task
How do I get that value? I suppose it's there in some function called from start_kernel. Can we print its value? I am using the linux-4.1 kernel.
To print the current CPU in the kernel, you can use the cpu field of task_struct. Note that the kernel configuration option CONFIG_THREAD_INFO_IN_TASK must be enabled. This works on 4.9 kernels:
printk("My current cpu is %d\n", current->cpu);
smp_processor_id() can also be used if the cpu field is not available.
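To check whether your kernel was built with CONFIG_THREAD_INFO_IN_TASK, something like this should work (assuming your distribution installs the kernel config under /boot):

$ grep CONFIG_THREAD_INFO_IN_TASK /boot/config-$(uname -r)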
I have installed Ubuntu 12.04 (32-bit). The current tracer is set to nop.
cat current_tracer
nop
Although the current tracer is nop, messages like the following are continuously printed while I perform other operations. Why is this happening, and how can I stop these messages from being printed?
<...>-573 [003] .... 6.304043: do_sys_open: "/etc/modprobe.d/blacklist-firewire.conf" 0 666
<...>-573 [003] .... 6.304055: do_sys_open: "/etc/modprobe.d/blacklist-framebuffer.conf" 0 666
<...>-569 [000] .... 6.304073: do_sys_open: "/run/udev/data/c4:73" 88000 666
<...>-573 [003] .... 6.304077: do_sys_open: "/etc/modprobe.d/blacklist-modem.conf" 0 666
<...>-573 [003] .... 6.304087: do_sys_open: "/etc/modprobe.d/blacklist-oss.conf" 0 666
<...>-573 [003] .... 6.304119: do_sys_open: "/etc/modprobe.d/blacklist-rare-network.conf" 0 666
<...>-573 [003] .... 6.304135: do_sys_open: "/etc/modprobe.d/blacklist-watchdog.conf" 0 666
<...>-573 [003] .... 6.304166: do_sys_open: "/etc/modprobe.d/blacklist.conf" 0 666
<...>-569 [000] .... 6.304180: do_sys_open: "/run/udev/data/c4:73.tmp" 88241 666
<...>-573 [003] .... 6.304190: do_sys_open: "/etc/modprobe.d/vmwgfx-fbdev.conf" 0 666
Thank you in advance.
Have you tried echo 0 > tracing_on?
Have you tried echo notrace_printk > trace_options?
However, if you are concerned about ftrace's overhead, you should do more than this and disable ftrace entirely.
If you are unsure of how to deal with ftrace, you can also look at the trace-cmd command.
In particular, try trace-cmd reset.
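For example, a minimal sketch of turning tracing off by hand (on Ubuntu 12.04 the tracing directory typically lives under debugfs); trace-cmd reset performs an equivalent cleanup:

cd /sys/kernel/debug/tracing
echo 0 > tracing_on        # stop writing to the ring buffer
echo 0 > events/enable     # disable all trace events
echo nop > current_tracer  # ensure no tracer is active
echo > trace               # clear the buffer contents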
In Linux, lsmod lists a lot of modules, but how can we find where those modules were loaded from?
For some modules, the "modprobe -l" command shows a path, but for others it does not.
Edit: I also tried "find" and "locate"; both of them list all kinds of versions:
locate fake
/svf/SVDrv/kernel/linux/.fake.ko.cmd
/svf/SVDrv/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv/kernel/linux/.fake.o.cmd
/svf/SVDrv/kernel/linux/fake.ko
/svf/SVDrv/kernel/linux/fake.mod.o
/svf/SVDrv/kernel/linux/fake.o
/svf/SVDrv.03.11.2014.16.00/kernel/linux/.fake.ko.cmd
/svf/SVDrv.03.11.2014.16.00/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.03.11.2014.16.00/kernel/linux/.fake.o.cmd
/svf/SVDrv.03.11.2014.16.00/kernel/linux/fake.ko
/svf/SVDrv.03.11.2014.16.00/kernel/linux/fake.mod.o
/svf/SVDrv.03.11.2014.16.00/kernel/linux/fake.o
/svf/SVDrv.04.29.2014.17.39/kernel/linux/.fake.ko.cmd
/svf/SVDrv.04.29.2014.17.39/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.04.29.2014.17.39/kernel/linux/.fake.o.cmd
/svf/SVDrv.04.29.2014.17.39/kernel/linux/fake.ko
/svf/SVDrv.04.29.2014.17.39/kernel/linux/fake.mod.o
/svf/SVDrv.04.29.2014.17.39/kernel/linux/fake.o
/svf/SVDrv.05.05.2014.11.25/kernel/linux/.fake.ko.cmd
/svf/SVDrv.05.05.2014.11.25/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.05.05.2014.11.25/kernel/linux/.fake.o.cmd
/svf/SVDrv.05.05.2014.11.25/kernel/linux/fake.ko
/svf/SVDrv.05.05.2014.11.25/kernel/linux/fake.mod.o
/svf/SVDrv.05.05.2014.11.25/kernel/linux/fake.o
/svf/SVDrv.05.05.2014.17.43/kernel/linux/.fake.ko.cmd
/svf/SVDrv.05.05.2014.17.43/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.05.05.2014.17.43/kernel/linux/.fake.o.cmd
/svf/SVDrv.05.05.2014.17.43/kernel/linux/fake.ko
/svf/SVDrv.05.05.2014.17.43/kernel/linux/fake.mod.o
/svf/SVDrv.05.05.2014.17.43/kernel/linux/fake.o
/svf/SVDrv.05.07.2014.14.59/kernel/linux/.fake.ko.cmd
/svf/SVDrv.05.07.2014.14.59/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.05.07.2014.14.59/kernel/linux/.fake.o.cmd
/svf/SVDrv.05.07.2014.14.59/kernel/linux/fake.ko
/svf/SVDrv.05.07.2014.14.59/kernel/linux/fake.mod.o
/svf/SVDrv.05.07.2014.14.59/kernel/linux/fake.o
Sorry if the answer comes a bit late, but I just stumbled across this particular question myself today...
To minimize manual labor, here is my one-liner that lists the paths the currently loaded modules were loaded from:
awk '{ print $1 }' /proc/modules | xargs modinfo -n | sort
I needed this to create a minimal kernel image containing only the modules I really need.
Unfortunately lsmod only displays the name field, which does not always match the module's file name (e.g. phy-am335x-control.ko vs. phy_am335x_control).
I hope this helps.
You can use "locate" or "find" on these modules to find where they are. For example:
[root@localhost core_src]# lsmod
Module Size Used by
iptable_filter 2793 0
ipt_MASQUERADE 2466 1
iptable_nat 6158 1
vmware_balloon 7199 0
i2c_piix4 12608 0
i2c_core 31276 1 i2c_piix4
shpchp 33482 0
ext4 371331 2
mbcache 8144 1 ext4
jbd2 93312 1 ext4
sd_mod 39488 4
crc_t10dif 1541 1 sd_mod
sr_mod 16228 0
cdrom 39803 1 sr_mod
mptspi 17051 3
mptscsih 36828 1 mptspi
mptbase 94005 2 mptspi,mptscsih
scsi_transport_spi 26151 1 mptspi
pata_acpi 3701 0
ata_generic 3837 0
ata_piix 22846 0
dm_mirror 14101 0
dm_region_hash 12170 1 dm_mirror
dm_log 10122 2 dm_mirror,dm_region_hash
dm_mod 81692 2 dm_mirror,dm_log
[root@localhost core_src]# locate vmware_balloon
/lib/modules/2.6.32-279.el6.x86_64/kernel/drivers/misc/vmware_balloon.ko
Get the paths from the list of loaded modules, without the need for awk:
while IFS= read -r line; do
    modinfo -n "${line%% *}"
done < /proc/modules | sort