How can I print something (for debugging purposes) to the console inside a Linux system call?
Or is there any not-too-difficult way of debugging kernel code?
Thanks.
The accepted way of printing inside the kernel is via printk().
You should also check the different log levels that can prefix a printk() message (such as KERN_DEBUG), which control where/how the messages are printed, including whether they are printed to all active consoles or just kept in the kernel log buffer.
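For instance, a minimal sketch (my_syscall and ret are just placeholders for your own code):

#include <linux/kernel.h>   /* printk() and the KERN_* log levels */

    /* somewhere inside your system call implementation */
    printk(KERN_DEBUG "my_syscall: entered\n");             /* shown only when the debug log level is enabled */
    printk(KERN_INFO  "my_syscall: returning %ld\n", ret);  /* ret is a placeholder variable */

The messages land in the kernel log buffer (readable with dmesg); whether they also reach the console depends on the current console_loglevel.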
If I run some command-line application in Linux, how to tell which files were accessed (read and/or written) by that process? I imagine I would need to place some hooks in the file-system driver and recompile the kernel, or something like that? Is there an easier way?
strace is a command that will display each system call the application makes.
From the man page:
In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process. The name of each system call, its arguments and its return value are printed on standard error or to the file specified with the -o option.
For instance, each open(), read() and write() operation will show the arguments and the return code.
You can get a list of the files accessed by your application with the lsof command on Linux.
In addition to the other answers mentioning lsof, strace (maybe ltrace could be useful too!) and fs_usage, for a process 1234 you could use the directory /proc/1234/; in particular, the open file descriptors are available under /proc/1234/fd/, and from inside your own program you could use /proc/self/fd/. See proc(5).
Perhaps inotify(7) or ptrace(2) is relevant too.
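For example, here is a small user-space sketch (purely illustrative) that lists its own open file descriptors through /proc/self/fd/ and resolves what each one points to:

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    DIR *d = opendir("/proc/self/fd");
    struct dirent *e;
    char link[64], target[PATH_MAX];
    ssize_t n;

    if (!d)
        return 1;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        snprintf(link, sizeof(link), "/proc/self/fd/%s", e->d_name);
        n = readlink(link, target, sizeof(target) - 1);   /* each entry is a symlink to the open file */
        if (n >= 0) {
            target[n] = '\0';
            printf("%s -> %s\n", e->d_name, target);
        }
    }
    closedir(d);
    return 0;
}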
This may be a stupid question, but is there a way to write to the Linux console from within a driver without using printk (i.e. syslog)?
For instance, working in a Linux driver, I need to output a single character as an event happens. I'd like to output 'w' as a write event starts, and a 'W' when it finishes. This happens frequently, so sending that through syslog isn't ideal.
Ideally, it would be great if I could just do the equivalent of printf("W") or putc('W') and have that simply go out the default console.
TIA
Mike
Writing to the console isn't something you want to do frequently. If printk is too expensive for you, you shouldn't try the console anyway.
But if you insist:
Within printk, printing to the console is handled by call_console_drivers. This function finds the console (registered via register_console) and calls it to print the data. The actual driver depends on what console you're using. The VGA screen is one option, the serial port is another (depending on boot parameters).
You can try to use the functions in console.h to interact with the console directly. I don't know how hard it would be to make it work.
Unfortunately no, as there is no concept of a "console" in the kernel (the console is a userspace process). You may try other kernel debugging options.
While typing a command like # ifconfig 10.0.0.10 up, is it possible to see all "possible" prints inside the kernel?
I know something like echo t > /proc/sysrq-trigger will give you stack trace with respect to processes running in a system.
What I am interested in is: with respect to a specific command, how can I get the kernel functions (stack trace) that get executed?
I know about debuggers like kgdb, but I am interested in quick ways like the sysrq methods, if any.
Thanks.
The answer to your question is "ftrace". It is not a tool, not a command, but just a kernel feature built into most modern Linux kernels.
For example, here you can use ftrace to understand how swap space is implemented (see all the key functions executed and their sequence in the pastebin files indicated below):
http://tthtlc.wordpress.com/2013/11/19/using-ftrace-to-understanding-linux-kernel-api/
Read this carefully and you can see there are many ways of using ftrace (one is dumping the kernel stack trace, which you requested; another is identifying the executed function flow):
http://lwn.net/Articles/366796/
If you don't want to use ftrace, another option is to use QEMU: you need to install Linux inside the QEMU guest, and it is a lot more powerful, as you can use gdb to step through every line (in C source code) or in assembly.
https://tthtlc.wordpress.com/2014/01/14/how-to-do-kernel-debugging-via-gdb-over-serial-port-via-qemu/
Just in case you want to google further, this is called "kgdb", or gdbserver, and outside of QEMU you run a gdb client.
tail -f /var/log/kern.log should display any interaction that occurs in the kernel.
It is more or less equivalent to the dmesg command.
strace ifconfig 10.0.0.10 up will show all the system calls made by ifconfig, but will not get inside the kernel's own calls.
Dear all,
I am a newbie at writing Linux kernel modules.
I used the printk function in the Linux kernel source code (2.4.29) for debugging and displaying messages.
Now I have to read all the messages I added, via httpd.
I tried to write the messages into a file instead of using printk, so I could read the file directly,
but it did not work very well.
So I have a stupid question...
Is it possible to write an LKM to monitor the syslog and rewrite it into another file?
I mean, is it possible for an LKM to be aware of the messages each time the Linux kernel executes printk?
Thanks a lot.
That is the wrong way to do it, because printk already does this: it writes to the file /proc/kmsg.
What you want is klogd, a user-space utility that deals with /proc/kmsg.
Another option is to use dmesg, which will output the whole content of the kernel buffers holding the printk messages, but I suggest you first read the linked article.
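If you really do want your own copy in a regular file, here is a minimal user-space sketch along the lines of what klogd does (the output path is just an example; reading /proc/kmsg requires root and consumes the messages, so don't run it alongside klogd/syslogd):

#include <stdio.h>

int main(void)
{
    /* /proc/kmsg blocks until new kernel messages arrive */
    FILE *in  = fopen("/proc/kmsg", "r");
    FILE *out = fopen("/var/tmp/kernel-messages.log", "a");   /* example path, e.g. somewhere httpd can serve */
    char line[1024];

    if (!in || !out)
        return 1;
    while (fgets(line, sizeof(line), in)) {
        fputs(line, out);
        fflush(out);
    }
    return 0;
}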
You never, ever, ever want to try to open a file on a user-space-mounted block file system from within the kernel. Imagine if the FS aborted and the kernel was still trying to write to it... kaboom (amongst MANY other reasons why it's a bad idea) :) As shodanex said, for your purposes, it's much better to use klogd.
Now, generally speaking, you have several ways to communicate meaningful data to userspace programs, such as:
Create a character device driver that causes userspace readers to block while waiting for data. Provide an ioctl() interface to it which lets other programs find out how many messages have been sent, etc.
Create a node in /proc/yourdriver to accomplish the same thing (see the sketch after this list)
Really, the most practical means is to just use printk()
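For illustration, here is a minimal sketch of the /proc approach for a reasonably recent kernel (5.6 or later, where struct proc_ops exists; older kernels, and certainly the 2.4 series mentioned above, use a different proc API such as create_proc_read_entry, so treat this only as a sketch):

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

/* what a reader of /proc/mymsg sees */
static int mymsg_show(struct seq_file *m, void *v)
{
    seq_puts(m, "hello from the module\n");
    return 0;
}

static int mymsg_open(struct inode *inode, struct file *file)
{
    return single_open(file, mymsg_show, NULL);
}

static const struct proc_ops mymsg_ops = {
    .proc_open    = mymsg_open,
    .proc_read    = seq_read,
    .proc_lseek   = seq_lseek,
    .proc_release = single_release,
};

static int __init mymsg_init(void)
{
    proc_create("mymsg", 0444, NULL, &mymsg_ops);
    return 0;
}

static void __exit mymsg_exit(void)
{
    remove_proc_entry("mymsg", NULL);
}

module_init(mymsg_init);
module_exit(mymsg_exit);
MODULE_LICENSE("GPL");

Once loaded, the messages can be read with cat /proc/mymsg (or served by httpd, as in the original question).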
We have a server (written in C and C++) that currently catches a SEGV and dumps some internal info to a file. I would like to generate a core file and write it to disk at the time we catch the SEGV, so our support reps and customers don't have to fuss with ulimit and then wait for the crash to happen again in order to get a core file. We have used the abort function in the past, but it is subject to the ulimit rules and doesn't help.
We have some legacy code that reads /proc/pid/maps and manually generates a core file, but it is out of date, and doesn't seem very portable (for example, I'm guessing it would not work in our 64-bit builds). What is the best way to generate and dump a core file in a Linux process?
Google has a library for generating coredumps from inside a running process called google-coredumper. This should ignore ulimit and other mechanisms.
The documentation for the call that generates the core file is here. According to the documentation, it seems that it is feasible to generate a core file in a signal handler, though it is not guaranteed to always work.
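For reference, a hedged sketch of how that call is typically wired up (assuming the header installs as <google/coredumper.h> and you link with -lcoredumper; the file name is just an example):

#include <signal.h>
#include <google/coredumper.h>   /* assumed install location of the header */

static void segv_handler(int sig)
{
    /* Snapshot the current process into a core file. Per the docs this is
       meant to be callable from a signal handler, though it is not
       guaranteed to succeed in every crash scenario. */
    WriteCoreDump("/tmp/myserver.core");   /* example path */

    /* re-raise with the default action so the process still dies with SIGSEGV */
    signal(sig, SIG_DFL);
    raise(sig);
}

int main(void)
{
    signal(SIGSEGV, segv_handler);
    *(volatile int *)0 = 0;   /* deliberately crash to demonstrate */
    return 0;
}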
I saw pmbrett's post and thought "hey, that's cool", but couldn't find that utility anywhere on my system (Gentoo).
So I did a bit of prodding, and discovered GDB has this option in it.
gdb --pid=4049 --batch -ex gcore
Seemed to work fine for me.
It's not, however, very useful because it traps the very lowest function that was in use at the time, but it still does a good job outside that (with no memory limitations, I dumped a 350M snapshot of a Firefox process with it).
Try using the Linux command gcore
usage: gcore [-o filename] pid
You'll need to use system() (or exec) and getpid() to build up the right command line to call it from within your process, for example as sketched below.
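A minimal sketch of that idea (the output prefix is just an example; error handling omitted):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Invoke the external gcore utility against our own pid. gcore attaches
   with ptrace, so the process is briefly stopped while the dump is written. */
static void dump_core_via_gcore(void)
{
    char cmd[64];

    snprintf(cmd, sizeof(cmd), "gcore -o /tmp/myserver %d", (int)getpid());
    system(cmd);   /* writes /tmp/myserver.<pid> */
}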
Some possible solutions^W ways of dealing with this situation:
Fix the ulimit!!!
Accept that you don't get a core file and run inside gdb, scripted to do a "thread apply all bt" on SIGSEGV
Accept that you don't get a core file and acquire a stack trace from within the application. The Stack Backtracing Inside Your Program article is pretty old but it should be possible these days too.
You can also change the ulimit from within your program with setrlimit(2). Like the ulimit shell command, this can lower limits, or raise them as far as the hard limit allows. Call setrlimit() at startup to allow core dumping, and you're fine.
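For example, a small sketch of raising the core limit at startup (as far as the hard limit permits):

#include <sys/resource.h>

/* Call once, early in main(), so a later crash can produce a core file. */
static void enable_core_dumps(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;     /* raise the soft limit up to the hard limit */
        setrlimit(RLIMIT_CORE, &rl);
    }
}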
I assume you have a signal handler that catches SEGV, for example, and does something like print a message and call _exit(). (Otherwise, you'd have a core file in the first place!) You could do something like the following.
/* needs <signal.h>, <sys/resource.h>, <unistd.h> */
void my_handler(int sig)
{
    /* ... existing logging, cleanup, etc. ... */
    if (wantCore_ && !fork()) {
        /* child: raise the soft core limit (ulimit -Sc) as far as the hard limit allows */
        struct rlimit rl;
        getrlimit(RLIMIT_CORE, &rl);
        rl.rlim_cur = rl.rlim_max;
        setrlimit(RLIMIT_CORE, &rl);
        sigset(sig, SIG_DFL);   /* reset the default handler */
        raise(sig);             /* doesn't return, generates a core file */
    }
    _exit(1);                   /* parent still exits as before */
}
Use the backtrace and backtrace_symbols glibc calls to get the trace; just keep in mind that backtrace_symbols uses malloc internally and in case of heap corruption it might fail.
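A minimal sketch of that approach, using backtrace_symbols_fd (which writes straight to a file descriptor and avoids malloc) instead of backtrace_symbols:

#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void segv_backtrace_handler(int sig)
{
    void *frames[64];
    int n;

    /* note: backtrace() itself may allocate on its first use, so calling it
       once at startup to "warm it up" is a common precaution */
    n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    _exit(1);
}

int main(void)
{
    signal(SIGSEGV, segv_backtrace_handler);
    *(volatile int *)0 = 0;   /* deliberately crash to demonstrate */
    return 0;
}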
system ("kill -6 ")
I'd give it a try if you are still looking for something