How can I show printk() messages in the console?

The information printed by printk() can only be seen on the virtual consoles (Ctrl+Alt+F1 ~ F7).
These consoles are very inconvenient for debugging since they can't scroll back. I am using the KDE desktop environment and a terminal emulator; how can I redirect printk() messages to that terminal?

The syntax of printk is
printk(LOG_LEVEL "message format", <arguments>);
The kernel defines 8 log levels in the file printk.h:
#define KERN_EMERG   "<0>"  /* system is unusable */
#define KERN_ALERT   "<1>"  /* action must be taken immediately */
#define KERN_CRIT    "<2>"  /* critical conditions */
#define KERN_ERR     "<3>"  /* error conditions */
#define KERN_WARNING "<4>"  /* warning conditions */
#define KERN_NOTICE  "<5>"  /* normal but significant condition */
#define KERN_INFO    "<6>"  /* informational */
#define KERN_DEBUG   "<7>"  /* debug-level messages */
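The level macro is just a string literal that the compiler concatenates with the format string. A minimal sketch of how a driver would use one (the mydrv name and message are made up for illustration):
#include <linux/kernel.h>   /* printk() and the KERN_* macros */

void report_overrun(int lost)
{
    /* KERN_WARNING "mydrv: ..." concatenates into a single string */
    printk(KERN_WARNING "mydrv: dropped %d packets\n", lost);
}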
Each log level corresponds to a number, and the lower the number, the higher the importance of the message.
The levels are useful in deciding what should be displayed to the user on the console and what should not be.
Every console has a log level, called the console log level. Any message with a log level number lower than the console log level is displayed on the console. Regardless of level, every message is also stored in the kernel log buffer, which can be read using the command "dmesg".
The console loglevel can be found by looking into the file /proc/sys/kernel/printk
$ cat /proc/sys/kernel/printk
4 4 1 7
The first number in the output is the console log level, the second is the default log level, third is the minimum log level and fourth is the maximum log level.
Log level 4 corresponds to KERN_WARNING. Thus all messages with log levels 3, 2, 1, and 0 will be displayed on the screen as well as logged, while messages with log levels 4, 5, 6, and 7 are only logged and can be viewed using "dmesg".
The console log level can be changed by writing into the proc entry
$ echo "6" > /proc/sys/kernel/printk
$ cat /proc/sys/kernel/printk
6 4 1 7
Now the console log level is set to 6, which is KERN_INFO.
Since you want to print out every message, you should set your console log level to the highest number, 8:
echo "8" > /proc/sys/kernel/printk
tail -f /var/log/kern.log &
or
cat /proc/kmsg & (Android Environment)

Use
dmesg -wH &
to force all your kernel messages that are printed to dmesg (and to the virtual terminals like Ctrl+Alt+F1, depending on your /proc/sys/kernel/printk log level and the level of your message) to also appear in your SSH or GUI console: Konsole, Terminal, or whatever you are using. And if you need to monitor only specific messages:
dmesg -wH | grep ERR &
I'm using it to monitor for the "ERROR" messages like
printk(KERN_EMERG "ERROR!\n");
that I printk from my driver.

printk() is a function provided by the Linux kernel to print debug/information/error messages. Internally, the kernel maintains a circular buffer that is __LOG_BUF_LEN bytes long (depending on the configuration, it can range from 4KB to 1MB).
There are 8 possible loglevels associated to messages and defined in linux/kernel.h:
KERN_EMERG: Emergency (system is unusable)
KERN_ALERT: Serious problem (i.e. action must be taken immediately)
KERN_CRIT: Critical condition, usually related to hardware or software failure
KERN_ERR: Used for error conditions, usually related to hardware difficulties
KERN_WARNING: Used to warn about problematic situations that are not serious
KERN_NOTICE: Normal situations that require notification
KERN_INFO: Informational messages; many drivers print information about the hardware found
KERN_DEBUG: Used only for debugging
Each string represents a number ranging from 0 to 7, with smaller values representing higher priorities. The default log level is given by DEFAULT_MESSAGE_LOGLEVEL, defined in kernel/printk/printk.c.
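As an aside, recent kernels also provide pr_*() convenience wrappers in linux/printk.h that prepend the level for you. A minimal sketch (the mydrv messages are made up):
#include <linux/printk.h>

static void example(void)
{
    pr_err("mydrv: reset failed\n");      /* like printk(KERN_ERR ...) */
    pr_info("mydrv: firmware loaded\n");  /* like printk(KERN_INFO ...) */
}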
How messages can be read from user-level depends both on the configuration of some user-level daemons (e.g., klogd and syslogd) and on the default loglevel. To answer your question, depending on your specific configuration, one or more of the following commands will allow you to read the output of printk:
The dmesg console command (usually, the preferred way for one-shot manual checking)
The tail -f /var/log/kern.log command
Through /proc/kmsg (discouraged)
Depending on your configuration, you may also want to change the default loglevel shown in console. Starting from klogd 2.1.31, the default loglevel can be changed by echoing into /proc/sys/kernel/printk. Examples:
echo 5 > /proc/sys/kernel/printk will display on console only messages with loglevel from 0 to 4
echo 8 > /proc/sys/kernel/printk will display on console messages with any loglevel

Related

OpenSIPS suddenly crashes after two or three days of running

I am using OpenSIPS. It works fine, but after 2-3 days it suddenly crashes. I don't understand the following log:
CRITICAL:core:receive_fd: EOF on 17
INFO:core:handle_sigs: child process 14090 exited by a signal 11
INFO:core:handle_sigs: core was generated
INFO:core:handle_sigs: terminating due to SIGCHLD
CRITICAL:core:receive_fd: EOF on 17
INFO:core:handle_sigs: child process 14090 exited by a signal 11
INFO:core:handle_sigs: core was generated
INFO:core:handle_sigs: terminating due to SIGCHLD
INFO:core:sig_usr: signal 15 received
How can I investigate what exactly is going wrong with my OpenSIPS? I am using Ubuntu; should I change to CentOS or Debian? Or does the above log indicate the error? Any ideas?
The log isn't telling you anything other than that it's crashed. The question is why.
If you run the same version & config on a different environment you'll probably have the same issues.
The time dependence of the crashes would suggest it's crashing when a specific race condition is met. This could be a call coming in with an invalid Caller ID you're trying to parse as an int, a routing block that's seldom called being called, a resource limitation on the system, or something totally different.
This is a pretty generic crash message, so without more debugging it's just guesswork. So let's enable debugging:
Debugging is enabled at the start of the OpenSIPS config file; here's how the default config looks (assuming you've built off the standard template):
####### Global Parameters #########
log_level=3
log_stderror=no
log_facility=LOG_LOCAL0
children=4
/* uncomment the following lines to enable debugging */
#debug_mode=yes
If you change yours to:
####### Global Parameters #########
log_level=8
log_stderror=yes
log_facility=LOG_LOCAL0
children=4
/* uncomment the following lines to enable debugging */
debug_mode=yes
You'll have debugging features enabled and a whole lot more info available in syslog.
Once you've done that sit back and wait for 2 days until it crashes, and you'll have an answer as to what module / routing block / packet is causing your instance to crash.
After that you can post the output here along with your config file, but there's a pretty high chance that someone on the OpenSIPs or Kamailio mailing lists will have had the same issue before.

What is $InputFilePollInterval in rsyslog.conf? Will increasing this value impact the level of logging?

In the rsyslog configuration file we configured all application logs to be written to /var/log/messages, but the logs get written at a very high rate. How can I decrease the level of logging at the application level?
Hope this is what you are looking for.
Open the file in a text editor:
/etc/rsyslog.conf
change the following parameters to values that suit you:
$SystemLogRateLimitInterval 3
$SystemLogRateLimitBurst 40
restart rsyslogd
service rsyslog restart
$InputFilePollInterval equivalent to: “PollingInterval”
PollingInterval seconds
Default: 10
This setting specifies how often files are to be polled for new data.
The time specified is in seconds. During each polling interval, all
files are processed in a round-robin fashion.
A short poll interval provides more rapid message forwarding, but
requires more system resources. While it is possible, we strongly
recommend not to set the polling interval to 0 seconds.
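In rsyslog.conf this is set either with the legacy directive or, in newer versions, as a parameter when loading the imfile module. A sketch, using the default of 10 seconds shown above (note that newer rsyslog versions may watch files with inotify rather than polling):
$InputFilePollInterval 10
or
module(load="imfile" PollingInterval="10")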
There are a few approaches to this, and it depends on what exactly you're looking to do, but you'll likely want to look into separating your facilities into separate output files, based on severity. This can be done using RFC5424 severity priority levels in your configuration file.
By splitting logging into separate files by facility and/or severity, and setting the stop option, messages based on severity can be output to as many or few files as you like.
Example (set in the rsyslog.conf file):
*.*;auth,authpriv,kern.none /var/log/syslog
kern.* /var/log/kern.log
kern.debug stop
*.=debug;\
auth,authpriv.none;\
news.none;mail.none /var/log/debug
This configuration:
Will not output any kern facility messages to /var/log/syslog (due to kern.none)
Will output all debug level logging of kern to kern.log and "stop" there
Will output any other debug logs that are not excluded by .none to debug
How you separate things out is up to you, but I would recommend looking over the first link I included. You also may want to look into the different local facilities that can be used as separate log pipelines.

printk loglevel usage in module programming

In the book LDD3 by Rubini, under the printk section, the author says that we can give log levels/priorities to our messages. But I tried a simple module program having printks with different log levels, and it shows them in the same order in which I wrote them inside the program. Why is it not printing according to priority?
I have copied the code here
#include <linux/module.h>
#include <linux/kernel.h>

static int __init log_init(void)
{
    printk(KERN_INFO "inside init 4\n");
    printk(KERN_ERR "inside init 3\n");
    printk(KERN_CRIT "inside init 2\n");
    return 0;
}

static void __exit log_exit(void)
{
    printk("inside exit\n");
}

module_init(log_init);
module_exit(log_exit);
MODULE_LICENSE("GPL");
And I got output as follows:
[ 1508.721441] inside init 4
[ 1508.721448] inside init 3
[ 1508.721454] inside init 2
root@jitesh-desktop:~/DD/debug/print#
So how can I print it according to priority, like:
init 2
init 3
init 4
You are confusing the purpose of the printk priorities. They are not meant to change the order of execution as you are wishing here.
By assigning different priorities to different kernel messages, we can filter which messages appear on the console by specifying an appropriate loglevel value through the kernel command line. For example, in the Linux kernel there are numerous messages with KERN_DEBUG priority. These are just ordinary debugging messages. So if you raise the console loglevel to the maximum, you'll see a flood of messages on the console, and your vital errors and warnings will be buried under this flurry of normal debugging messages.
So when you are debugging serious issues, you can set the loglevel to a low value so that only critical errors and warnings are displayed on the console.
Note: Irrespective of the loglevel, all printk messages are stored in the kernel buffer. The priority decides which one of them goes to the console.

Where does output of print in kernel go?

I am debugging a driver for Linux (specifically Ubuntu Server 9.04), and there are several printk statements in the code.
Where can I view the output of these statements?
EDIT1: What I'm trying to do is write to the kernel using the proc file system.
The print code is
static int proc_fractel_config_write(struct file *file, const char *argbuf, unsigned long count, void *data)
{
printk(KERN_DEBUG "writing fractel config\n");
...
In kern.log, I see the following message when I try to overwrite the file /proc/net/madwifi/ath1/fractel_config (with a varying timestamp, of course):
[ 8671.924873] proc write
[ 8671.924919]
Any explanations?
Many times KERN_DEBUG level messages are filtered and you need to explicitly increase the logging level. You can see what the system defaults are by examining /proc/sys/kernel/printk. For example, on my system:
# cat /proc/sys/kernel/printk
4 4 1 7
the first number shows the console log level is KERN_WARNING (see proc(5) man pages for more information). This means KERN_NOTICE, KERN_INFO, and KERN_DEBUG messages will be filtered from the console. To increase the logging level or verbosity, use dmesg
$ sudo dmesg -n 7
$ cat /proc/sys/kernel/printk
7 4 1 7
Here, setting the level to 7 (KERN_DEBUG) will allow all levels of messages to appear on the console. To automate this, add loglevel=N to the kernel boot parameters where N is the log level you want going to the console or ignore_loglevel to print all kernel messages to the console.
It depends on the distribution, but many use klogd(8) to get the messages from the kernel and will either log them to a file (sometimes /var/log/dmesg or /var/log/kernel) or to the system log via syslog(3). In the latter case, where the log entries end up will depend on the configuration of syslogd(8).
One note about the dmesg command: Kernel messages are stored in a circular buffer, so large amounts of output will be overwritten.
You'll get the output with the command dmesg
dmesg outputs all the messages from the kernel, so finding your desired messages can be difficult. Better to use a dmesg and grep combination, with a driver-specific label in all your printk messages. That will ease eliminating all the unwanted messages.
printk("test: hello world\n");
dmesg | grep test
I had this problem on Ubuntu 11.10 and 10.04 LTS. On the former, I edited /etc/rsyslog.d/50-default.conf, then restarted rsyslogd using "sudo service rsyslog restart". Then it worked.
Note that Ubuntu uses *r*syslogd, not syslogd.
You might try a higher level than KERN_DEBUG, for example KERN_INFO. Depending on your configuration the lowest priority messages might not be displayed.
In CentOS (at least in CentOS 6.6), the output will be in /var/log/messages.

Can syslog Performance Be Improved?

We have an application on Linux that used the syslog mechanism. After a week spent trying to figure out why this application was running slower than expected, we discovered that if we eliminated syslog, and just wrote directly to a log file, performance improved dramatically.
I understand why syslog is slower than direct file writes. But I was wondering: Are there ways to configure syslog to optimize its performance?
You can configure syslogd (and rsyslog at least) not to sync the log files after a log message by prepending a "-" to the log file path in the configuration file. This speeds up performance at the expense of the danger that log messages could be lost in a crash.
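For example, in /etc/syslog.conf (the mail selector is only an illustration):
mail.*    -/var/log/mail.log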
There are several options to improve syslog performance:
Optimizing out calls with a macro
#include <syslog.h>

int LogMask = LOG_UPTO(LOG_WARNING);
/* Skip the call entirely when the priority is masked out. Note LOG_MASK():
 * priorities are small integers, while the mask is a bit field. */
#define syslog(pri, ...) \
    do { if (LOG_MASK(pri) & LogMask) (syslog)((pri), __VA_ARGS__); } while (0)

int main(int argc, char **argv)
{
    setlogmask(LogMask);  /* keep the daemon-side mask consistent */
    ...
}
An advantage of using a macro to filter syslog calls is that the entire call is
reduced to a conditional jump on a global variable, very helpful if you happen to
have DEBUG calls which are translating large datasets through other functions.
setlogmask()
setlogmask(LOG_UPTO(LOG_LEVEL))
setlogmask() will optimize the call by not logging to /dev/log, but the program will
still call the functions used as arguments.
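A minimal sketch of this approach, using only standard <syslog.h> calls:
#include <syslog.h>

int main(void)
{
    /* Discard LOG_INFO and LOG_DEBUG before they reach /dev/log. */
    setlogmask(LOG_UPTO(LOG_WARNING));

    syslog(LOG_WARNING, "sent to the daemon");
    syslog(LOG_DEBUG, "arguments still evaluated, but message discarded");
    return 0;
}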
filtering with syslog.conf
*.err /var/log/messages
"check out the man page for syslog.conf for details."
configure syslog to do asynchronous or buffered logging
metalog used to buffer log output and flush it in blocks. Stock syslog and syslog-ng
do not do this, as far as I know.
Before embarking on writing a new daemon, check whether syslog-ng is faster (or can be configured to be faster) than plain old syslog.
One trick you can use if you control the source to the logging application is to mask out the log level you want in the app itself, instead of in syslog.conf. I did this years ago with an app that generated a huge, huge, huge amount of debug logs. Rather than remove the calls from the production code, we just masked so that debug level calls never got sent to the daemon. I actually found the code, it's Perl but it's just a front to the setlogmask(3) call.
use Sys::Syslog;
# Start system logging
# setlogmask controls what levels we're going to let get through. If we mask
# them off here, then the syslog daemon doesn't need to be concerned by them
# 1 = emerg
# 2 = alert
# 4 = crit
# 8 = err
# 16 = warning
# 32 = notice
# 64 = info
# 128 = debug
Sys::Syslog::setlogsock('unix');
openlog($myname,'pid,cons,nowait','mail');
setlogmask(127); # allow everything but debug
#setlogmask(255); # everything
syslog('debug',"syslog opened");
Not sure why I used decimal instead of a bitmask... shrug
Write your own syslog implementation. :-P
This can be accomplished in two ways.
Write your own LD_PRELOAD hook to override the syslog functions, and make them output to stderr instead. I actually wrote a post about this many years ago: http://marc.info/?m=97175526803720 :-P
Write your own syslog daemon. It's just a simple matter of grabbing datagrams out of /dev/log! :-P
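For the curious, a toy version of option 2 really is short. A sketch only: it must run as root, assumes no other syslogd owns /dev/log, and omits all error handling:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char buf[8192];
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    strncpy(addr.sun_path, "/dev/log", sizeof(addr.sun_path) - 1);
    unlink("/dev/log");   /* take over the socket from any old daemon */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            fprintf(stderr, "%s\n", buf);  /* raw "<pri>..." datagrams */
        }
    }
}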
Okay, okay, so these are both facetious answers. Have you profiled syslogd to see where it's choking up most?
You may configure syslogd's level (or facility) to log asynchronously by putting a minus before the path to the logfile (e.g.: user.* [tab] -/var/log/user.log).
Cheers.
The syslog-async() implementation may help, at the risk of lost log lines / bounded delays at other times.
http://thekelleys.org.uk/syslog-async/
Note: 'asynchronous' here refers to queueing log events within your application, and not the asynchronous syslogd output file configuration option that other answers refer to.
