printk loglevel usage in module programming - linux

In the book LDD3 by Rubini, under the printk section, the author says that we can give log levels/priorities to our messages. But I tried a simple module program with printks at different log levels, and it prints the messages in the same order in which I wrote them inside the program. Why does it not print according to priority?
I have copied the code here:
#include <linux/module.h>
#include <linux/kernel.h>

static int __init log_init(void)
{
    printk(KERN_INFO "inside init 4\n");
    printk(KERN_ERR "inside init 3\n");
    printk(KERN_CRIT "inside init 2\n");
    return 0;
}

static void __exit log_exit(void)
{
    printk("inside exit\n");
}

module_init(log_init);
module_exit(log_exit);
MODULE_LICENSE("GPL");
And I got the output as follows:
[ 1508.721441] inside init 4
[ 1508.721448] inside init 3
[ 1508.721454] inside init 2
root#jitesh-desktop:~/DD/debug/print#
So how can I print it according to priority, like this:
init 2
init 3
init 4

You are confusing the purpose of the printk priorities. They are not meant to change the order of execution, as you are expecting here.
By assigning different priorities to different kernel messages, we can filter the messages that appear on the console by specifying an appropriate loglevel value, for example through the kernel command line. In the Linux kernel there are numerous messages with KERN_DEBUG priority; these are just ordinary debugging messages. If you raise the console loglevel high enough that KERN_DEBUG (7) messages are displayed, you'll see a flood of messages on the console, and your vital errors and warnings will be buried under that flurry of ordinary debugging output.
So when you are debugging serious issues, you can set the loglevel to a low value so that only critical errors and warnings are displayed on the console.
Note: irrespective of the loglevel, all printk messages are stored in the kernel buffer. The priority only decides which of them also appear on the console.
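If you want to experiment with this filtering from user space, the console loglevel can also be changed programmatically through the klogctl(2)/syslog(2) interface that tools like dmesg use. A minimal sketch (assuming glibc's klogctl() wrapper and root privileges):
#include <stdio.h>
#include <sys/klog.h>

int main(void)
{
    /* type 8 = SYSLOG_ACTION_CONSOLE_LEVEL: the third argument
     * becomes the new console loglevel (1..8); 8 shows everything,
     * including KERN_DEBUG. Requires CAP_SYS_ADMIN. */
    if (klogctl(8, NULL, 8) < 0) {
        perror("klogctl");
        return 1;
    }
    printf("console loglevel raised to 8\n");
    return 0;
}
With the console loglevel raised this way, reloading the example module should show all three messages on the console, still in program order.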

Related

Are function locations altered when running a program through GDB?

I'm trying to run through a buffer overflow exercise, here is the code:
#include <stdio.h>

int badfunction() {
    char buffer[8];
    gets(buffer);
    puts(buffer);
}

int cantrun() {
    printf("This function cant run because it is never called");
}

int main() {
    badfunction();
}
This is a simple piece of code. The objective is to overflow the buffer in badfunction() and overwrite the return address, making it point to the memory address of the function cantrun().
Step 1: Find the offset of the return address (in this case it's 12 bytes: 8 for the buffer and 4 for the saved base pointer).
Step 2: Find the memory location of cantrun(); gdb says it's 0x0804849a.
When I run the program with printf "%012x\x9a\x84\x04\x08" | ./vuln, I get the error "illegal instruction". This suggests to me that I have correctly overwritten the EIP, but that the memory location of cantrun() is incorrect.
I am using Kali Linux, Kernel 3.14, I have ASLR turned off and I am using execstack to allow an executable stack. Am I doing something wrong?
UPDATE:
As a shot in the dark, I tried to find the correct instruction by moving the address around, and 0x0804849b does the trick. Why is this different from what GDB shows? When running GDB, 0x0804849a is the location of the prologue instruction push ebp and 0x0804849b is the prologue instruction mov ebp,esp.
gdb doesn't do anything to change the locations of functions in the programs it executes. ASLR may matter, but by default gdb turns this off to enable simpler debugging.
It's hard to say why you are seeing the results you are. What does disassembling the function in gdb show?

How can I show printk() message in console?

The information printed by printk() can only be seen on the virtual consoles (Ctrl+Alt+F1 through F7).
These consoles are very inconvenient for debugging since they can't scroll back. I am using the KDE desktop environment and a console terminal; how can I redirect the printk() messages to my console?
The syntax of printk is
printk ("log level" "message", <arguments>);
kernel defines 8 log levels in the file printk.h
#define KERN_EMERG "<0>" /* system is unusable*/
#define KERN_ALERT "<1>" /* action must be taken immediately*/
#define KERN_CRIT "<2>" /* critical conditions*/
#define KERN_ERR "<3>" /* error conditions*/
#define KERN_WARNING "<4>" /* warning conditions*/
#define KERN_NOTICE "<5>" /* normal but significant condition*/
#define KERN_INFO "<6>" /* informational*/
#define KERN_DEBUG "<7>" /* debug-level messages*/
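Because each macro is just a string like "<4>", the level is glued to your format string by C's adjacent-string-literal concatenation; there is no comma between them. A small hedged example (the message text and variable are made up for illustration):
/* KERN_WARNING ("<4>") is concatenated with the format string at
 * compile time; it is not a separate printk() argument. */
int retries = 3; /* hypothetical value, for illustration only */
printk(KERN_WARNING "mydrv: giving up after %d retries\n", retries);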
Each log level corresponds to a number, and the lower the number, the higher the importance of the message.
The levels are useful in deciding what should be displayed to the user on the console and what should not be.
Every console has a log level, called the console log level. Any message with a log level number lower than the console log level gets displayed on the console. All messages, displayed or not, are stored in the kernel log buffer, which can be inspected using the command "dmesg".
The console loglevel can be found by looking into the file /proc/sys/kernel/printk
$ cat /proc/sys/kernel/printk
4 4 1 7
The first number in the output is the console log level, the second is the default log level, third is the minimum log level and fourth is the maximum log level.
Log level 4 corresponds to KERN_WARNING. Thus all the messages with log levels 3,2,1 and 0 will get displayed on the screen as well as logged and the messages with log level 4,5,6,7 only get logged and can be viewed using "dmesg".
The console log level can be changed by writing into the proc entry
$ echo "6" > /proc/sys/kernel/printk
$ cat /proc/sys/kernel/printk
6 4 1 7
Now the console log level is set to 6, which is KERN_INFO.
Here you want to print out every message, so you should set your console log level to the highest number, 8:
echo "8" > /proc/sys/kernel/printk
tail -f /var/log/kern.log &
or
cat /proc/kmsg & (Android Environment)
Use
dmesg -wH &
to force all your kernel messages that are printed to dmesg (and also to the virtual terminals like Ctrl+Alt+F1, depending on your /proc/sys/kernel/printk log level and the level of your message) to also appear at your SSH or GUI console: Konsole, Terminal, or whatever you are using. And if you need to monitor only for specific messages:
dmesg -wH | grep ERR &
I'm using it to monitor for the "ERROR" messages like
printk(KERN_EMERG "ERROR!\n");
that I printk from my driver.
printk() is a function provided by the Linux kernel to print debug/information/error messages. Internally, the kernel maintains a circular buffer that is __LOG_BUF_LEN bytes long (depending on the configuration, it can range from 4KB to 1MB).
There are 8 possible loglevels associated with messages, defined in linux/kernel.h:
KERN_EMERG: Emergency (system is unusable)
KERN_ALERT: Serious problem (i.e. action must be taken immediately)
KERN_CRIT: Critical condition, usually related to hardware or software failure
KERN_ERR: Used for error conditions, usually related to hardware difficulties
KERN_WARNING: Used to warn about problematic situations that are not serious
KERN_NOTICE: Normal situations that require notification
KERN_INFO: Informational messages; many drivers print information about the hardware found
KERN_DEBUG: Used only for debugging
Each string represents a number ranging from 0 to 7, with smaller values representing higher priorities. The default log level is equal to the
DEFAULT_MESSAGE_LOGLEVEL variable specified in kernel/printk/printk.c.
How messages can be read from user-level depends both on the configuration of some user-level daemons (e.g., klogd and syslogd) and on the default loglevel. To answer your question, depending on your specific configuration, one or more of the following commands will allow you to read the output of printk:
The dmesg console command (usually, the preferred way for one-shot manual checking)
The tail -f /var/log/kern.log command
Through /proc/kmsg (discouraged)
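For completeness, the same kernel ring buffer can also be read programmatically through the syslog(2) interface that dmesg wraps. A minimal sketch (assuming glibc's klogctl() wrapper and sufficient privileges):
#include <stdio.h>
#include <stdlib.h>
#include <sys/klog.h>

int main(void)
{
    /* type 10 = SYSLOG_ACTION_SIZE_BUFFER: ask for the ring buffer size */
    int len = klogctl(10, NULL, 0);
    if (len < 0) {
        perror("klogctl");
        return 1;
    }
    char *buf = malloc(len);
    if (!buf)
        return 1;
    /* type 3 = SYSLOG_ACTION_READ_ALL: non-destructive read of the buffer */
    int n = klogctl(3, buf, len);
    if (n < 0) {
        perror("klogctl");
        free(buf);
        return 1;
    }
    fwrite(buf, 1, n, stdout); /* raw messages, including "<level>" prefixes */
    free(buf);
    return 0;
}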
Depending on your configuration, you may also want to change the default loglevel shown in console. Starting from klogd 2.1.31, the default loglevel can be changed by echoing into /proc/sys/kernel/printk. Examples:
echo 5 > /proc/sys/kernel/printk will display on console only messages with loglevel from 0 to 4
echo 8 > /proc/sys/kernel/printk will display on console messages with any loglevel

SIGSEGV Crash but unable to collect backtrace

Information about the application:
Linux - 2.4.1 Kernel
m68k based embedded application
Single process multithreaded application
We have an application in which we have installed a segmentation_handler function for SIGSEGV. In this segmentation handler we create a file, write a line to it (like "obtained stack frame"), then, using backtrace() and backtrace_symbols(), write the whole stack trace into the same file.
Problem: We get a SIGSEGV (confirmed by the creation of the log file), but unfortunately the file is empty (a 0 KB file) with no information in it. (Even the first string, which is a plain string, is not in the file.)
I want to understand in what scenarios such a thing can happen, because we could fix the crash if we had the stack trace; but we don't have it, and the mechanism to get it did not work either :(
void segmentation_handler(int signal_no)
{
    char buffer[512];
    ............. /* elided in the question; presumably declares
                     void *array[50], char **strings, int size, n */

    InitLog(); /* create a log file */
    printf("\n*** segmentation fault occurred ***\n");
    fflush(stdout);
    memset(buffer, 0, 512);
    size = backtrace(array, 50);
    strings = backtrace_symbols(array, size);
    sprintf(buffer, "Obtained %d stack frames.\n", size);
    Log(buffer); /* write the buffer into the file */
    for (n = 0; n < size; n++) {
        sprintf(buffer, "%s\n", strings[n]);
        Log(buffer);
    }
    CloseLog();
}
Your segmentation handler is very naive and contains multiple errors. Here is a short list:
You are calling printf() and multiple other functions which are not async-signal-safe. Consider: printf() internally takes a lock to synchronize multiple calls to the same stream from multiple threads. What if your segmentation fault happened in the middle of printf() while the lock was held? You would deadlock in the middle of the segmentation handler...
You are allocating memory (the call to backtrace_symbols()), but if the segmentation fault was due to malloc arena corruption (a very likely cause of segmentation violations), you would double-fault inside the segmentation handler.
If multiple threads cause an exception at the same time, the code will open the file multiple times and overwrite the log.
There are other problems, but these are the basics...
There is a video of my lecture on how to write proper fault handlers available here: http://free-electrons.com/pub/video/2008/ols/ols2008-gilad-ben-yossef-fault-handlers.ogg
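To illustrate the points above, here is a minimal sketch of a handler that restricts itself to async-signal-safe calls: write(2) for the banner and backtrace_symbols_fd(), which formats directly to a file descriptor without calling malloc(). (Note that backtrace() itself is not formally async-signal-safe either, since its first call may trigger a dynamic load, so treat this as a sketch rather than a guarantee; the log path is made up for illustration.)
#include <execinfo.h>
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static void segv_handler(int sig)
{
    void *frames[50];
    const char msg[] = "*** segmentation fault ***\n";

    /* open(2) and write(2) are async-signal-safe */
    int fd = open("/tmp/crash.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd >= 0) {
        write(fd, msg, sizeof(msg) - 1);
        int n = backtrace(frames, 50);
        /* formats symbols straight to fd, no malloc involved */
        backtrace_symbols_fd(frames, n, fd);
        close(fd);
    }
    _exit(128 + sig); /* _exit() is safe; exit() is not */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = segv_handler;
    sigaction(SIGSEGV, &sa, NULL);
    /* ... rest of the program ... */
    return 0;
}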
Remove the segmentation handler.
Allow the program to dump core (ulimit -c unlimited or setrlimit in process)
See if you have a core file.
Do the backtrace offline using your toolchain debugger.
You can also write a program that segfaults on purpose and test both methods (i.e. post-mortem using the core file, or in the signal handler).
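A small sketch of such a test program, which raises its own core-file limit with setrlimit(2) (the in-process equivalent of ulimit -c unlimited) and then crashes deliberately:
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Equivalent of "ulimit -c unlimited" for this process only */
    struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
    if (setrlimit(RLIMIT_CORE, &rl) != 0)
        perror("setrlimit");

    /* Deliberate null-pointer write to raise SIGSEGV */
    volatile int *p = NULL;
    *p = 42;
    return 0;
}
After the crash, load the core file in your toolchain debugger (e.g. gdb ./a.out core) and run bt to get the backtrace offline.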

Disable randomization of memory addresses

I'm trying to debug a binary that uses a lot of pointers. Sometimes, to see output quickly and figure out errors, I print out the addresses of objects and their corresponding values; however, the object addresses are randomized, and this defeats the purpose of this quick check.
Is there a way to disable this temporarily/permanently so that I get the same values every time I run the program?
Oops. OS is Linux fsttcs1 2.6.32-28-generic #55-Ubuntu SMP Mon Jan 10 23:42:43 UTC 2011 x86_64 GNU/Linux
On Ubuntu, it can be disabled with:
echo 0 > /proc/sys/kernel/randomize_va_space
On Windows, this post might be of some help...
http://blog.didierstevens.com/2007/11/20/quickpost-another-funny-vista-trick-with-aslr/
To temporarily disable ASLR for a particular program you can always issue the following (no need for sudo)
setarch `uname -m` -R ./yourProgram
You can also do this programmatically from C source before a UNIX exec.
If you take a look at the sources for setarch (here's one source):
http://code.metager.de/source/xref/linux/utils/util-linux/sys-utils/setarch.c
You can see it boils down to a system call (syscall) or a function call (depending on what your system defines). From setarch.c:
#ifndef HAVE_PERSONALITY
# include <syscall.h>
# define personality(pers) ((long)syscall(SYS_personality, pers))
#endif
On my CentOS 6 64-bit system, it looks like it uses a function (which probably calls the self-same syscall above). Take a look at this snippet from the include file in /usr/include/sys/personality.h (as referenced as <sys/personality.h> in the setarch source code):
/* Set different ABIs (personalities). */
extern int personality (unsigned long int __persona) __THROW;
What it boils down to, is that you can, from C code, call and set the personality to use ADDR_NO_RANDOMIZE and then exec (just like setarch does).
#include <sys/personality.h>

#ifndef HAVE_PERSONALITY
# include <syscall.h>
# define personality(pers) ((long)syscall(SYS_personality, pers))
#endif

...

void mycode()
{
    // If requested, turn off the address randomization feature right before execing
    if (MyGlobalVar_Turn_Address_Randomization_Off) {
        personality(ADDR_NO_RANDOMIZE);
    }
    execvp(argv[0], argv); // ... from set-arch.
}
It's pretty obvious you can't turn address randomization off in the process you are in (grin: unless maybe dynamic loading), so this only affects forks and execs later. I believe the Address Randomization flags are inherited by child sub-processes?
Anyway, that's how you can programmatically turn off address randomization in C source code. This may be your only solution if you don't want to force a user to intervene manually and start up with setarch or one of the other solutions listed earlier.
Before you complain about security issues in turning this off, some shared memory libraries/tools (such as PickingTools shared memory and some IBM databases) need to be able to turn off randomization of memory addresses.
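Whichever method you use, a quick sanity check that randomization is really off is a tiny test program (a sketch, not from the original answers) that prints a few addresses; run it twice and compare the output:
#include <stdio.h>
#include <stdlib.h>

int global_var;

int main(void)
{
    int local_var;
    /* With randomization disabled, these should be identical across
     * runs; with ASLR on, at least the stack and heap values will
     * change from run to run. */
    printf("stack : %p\n", (void *)&local_var);
    printf("heap  : %p\n", malloc(16));
    printf("global: %p\n", (void *)&global_var);
    printf("code  : %p\n", (void *)main);
    return 0;
}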

Where does output of print in kernel go?

I am debugging a driver for Linux (specifically Ubuntu Server 9.04), and there are several printk statements in the code.
Where can I view the output of these statements?
EDIT1: What I'm trying to do is write to the kernel using the proc file system.
The print code is
static int proc_fractel_config_write(struct file *file, const char *argbuf,
                                     unsigned long count, void *data)
{
    printk(KERN_DEBUG "writing fractel config\n");
    ...
In kern.log, I see the following message when I try to overwrite the file /proc/net/madwifi/ath1/fractel_config (with varying timestamps, of course).
[ 8671.924873] proc write
[ 8671.924919]
Any explanations?
Many times KERN_DEBUG level messages are filtered and you need to explicitly increase the logging level. You can see what the system defaults are by examining /proc/sys/kernel/printk. For example, on my system:
# cat /proc/sys/kernel/printk
4 4 1 7
the first number shows the console log level is KERN_WARNING (see the proc(5) man page for more information). This means KERN_WARNING, KERN_NOTICE, KERN_INFO, and KERN_DEBUG messages will be filtered from the console. To increase the logging level or verbosity, use dmesg:
$ sudo dmesg -n 7
$ cat /proc/sys/kernel/printk
7 4 1 7
Here, setting the level to 7 allows everything up through KERN_INFO (6) to appear on the console; set it to 8 if you also want KERN_DEBUG (7) messages. To automate this, add loglevel=N to the kernel boot parameters, where N is the log level you want going to the console, or ignore_loglevel to print all kernel messages to the console.
It depends on the distribution, but many use klogd(8) to get the messages from the kernel and will either log them to a file (sometimes /var/log/dmesg or /var/log/kernel) or to the system log via syslog(3). In the latter case, where the log entries end up will depend on the configuration of syslogd(8).
One note about the dmesg command: Kernel messages are stored in a circular buffer, so large amounts of output will be overwritten.
You'll get the output with the command dmesg
dmesg outputs all the messages from the kernel, so finding your desired messages can be difficult. It is better to use a dmesg and grep combination and put a driver-specific label in all your printk messages; that makes it easy to eliminate the unwanted messages.
printk("test: hello world\n");
dmesg | grep test
I had this problem on Ubuntu 11.10 and 10.04 LTS. On the former, I edited /etc/rsyslog.d/50-default.conf, then restarted rsyslogd using "sudo service rsyslog restart". Then it worked.
Note that Ubuntu uses rsyslogd, not syslogd.
You might try a higher level than KERN_DEBUG, for example KERN_INFO. Depending on your configuration the lowest priority messages might not be displayed.
In CentOS (at least in CentOS 6.6), the output will be in /var/log/messages.
