Where does the output of print statements in the kernel go? - Linux

I am debugging a driver for Linux (specifically Ubuntu Server 9.04), and there are several printf-style statements in the code.
Where can I view the output of these statements?
EDIT1: What I'm trying to do is write to the kernel using the proc filesystem.
The print code is
static int proc_fractel_config_write(struct file *file, const char *argbuf, unsigned long count, void *data)
{
printk(KERN_DEBUG "writing fractel config\n");
...
In kern.log, I see the following message when I try to overwrite the file /proc/net/madwifi/ath1/fractel_config (with varying timestamps, of course).
[ 8671.924873] proc write
[ 8671.924919]
Any explanations?

Often, KERN_DEBUG-level messages are filtered out and you need to explicitly raise the console log level. You can see what the system defaults are by examining /proc/sys/kernel/printk. For example, on my system:
# cat /proc/sys/kernel/printk
4 4 1 7
The first number shows that the console log level is KERN_WARNING (see the proc(5) man page for more information). This means KERN_NOTICE, KERN_INFO, and KERN_DEBUG messages will be filtered from the console. To increase the logging level or verbosity, use dmesg:
$ sudo dmesg -n 7
$ cat /proc/sys/kernel/printk
7 4 1 7
Here, setting the console level to 7 lets everything up to and including KERN_INFO reach the console; because the kernel only prints messages whose level is strictly lower than the console log level, use 8 (or the ignore_loglevel boot parameter) if you also want KERN_DEBUG messages on the console. To automate this, add loglevel=N to the kernel boot parameters, where N is the console log level you want, or ignore_loglevel to print all kernel messages to the console.
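For reference, this is roughly what dmesg -n does under the hood: a klogctl(3) call with SYSLOG_ACTION_CONSOLE_LEVEL. A minimal sketch (needs root or CAP_SYSLOG; the hard-coded level 7 is just an example):
#include <stdio.h>
#include <sys/klog.h>

int main(void)
{
    /* 8 == SYSLOG_ACTION_CONSOLE_LEVEL; the third argument is the new console log level */
    if (klogctl(8, NULL, 7) < 0) {
        perror("klogctl");
        return 1;
    }
    puts("console log level set to 7");
    return 0;
}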

It depends on the distribution, but many use klogd(8) to get the messages from the kernel and will either log them to a file (sometimes /var/log/dmesg or /var/log/kernel) or to the system log via syslog(3). In the latter case, where the log entries end up will depend on the configuration of syslogd(8).
One note about the dmesg command: kernel messages are stored in a circular buffer of fixed size, so with large amounts of output, older messages will be overwritten.

You'll get the output with the command dmesg

dmesg outputs all the messages from the kernel ring buffer, so finding the messages you care about can be difficult. It is better to use a dmesg and grep combination and to put a driver-specific label in all your printk messages; that makes it easy to eliminate the unwanted ones.
printk("test: hello world")
dmesg | grep test
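The kernel also provides a cleaner way to attach such a label: define pr_fmt before the includes and use the pr_*() helpers, and every message from that file gets the prefix automatically. A rough sketch of a module doing this (the module name and the "test: " prefix are illustrative; build it with the usual obj-m kbuild setup):
#define pr_fmt(fmt) "test: " fmt

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

static int __init test_label_init(void)
{
    pr_info("hello world\n");     /* logged as "test: hello world" at KERN_INFO */
    pr_debug("extra detail\n");   /* KERN_DEBUG; compiled out unless DEBUG or dynamic debug is enabled */
    return 0;
}

static void __exit test_label_exit(void)
{
    pr_info("unloading\n");
}

module_init(test_label_init);
module_exit(test_label_exit);
MODULE_LICENSE("GPL");
Then dmesg | grep "test:" picks out only this module's messages.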

I had this problem on Ubuntu 11.10 and 10.04 LTS; on the former I edited /etc/rsyslog.d/50-default.conf and then restarted rsyslogd with "sudo service rsyslog restart". Then it worked.
Note that Ubuntu uses *r*syslogd, not syslogd.

You might try a higher level than KERN_DEBUG, for example KERN_INFO. Depending on your configuration the lowest priority messages might not be displayed.

In CentOS (at least in CentOS 6.6), the output will be in /var/log/messages.

Related

How to investigate which process causes wakeups during laptop sleep-mode in MacOS (or Linux)?

My MacBook spontaneously wakes up from sleep mode with high fan activity.
I want to investigate this via the RTC or power settings, or by strace-ing processes, etc. (using some process/kernel magic!).
Hint: It is probably managed by "rtcwake".
I am not even sure if this is a scheduled task, or from a WiFi wakeup, or something else.
I don't want guesses about what usually causes this in Mojave, etc. Instead:
I need to do a systematic investigation of this on my macOS (Mojave) machine. Linux-related answers are also appreciated.
This is about system standby, sleep mode, suspended mode. (Note that this is not about the wakeup of individual processes; the whole laptop turns on spontaneously.)
Reading the log file is the best way to debug the problem.
So, try this command in your Terminal to fetch the system logs;
it will show you the wake-up history.
log show --style syslog | fgrep "Wake reason: EC.LidOpen"
To see the wake reason:
For macOS Sierra, Mojave, Catalina, and newer
log show |grep -i "Wake reason"
Or for MacOS El Capitan, Yosemite, Mavericks, and older
syslog |grep -i "Wake reason"
This will look like:
MacBookPro kernel[0] : Wake reason = OHC1
MacBookPro kernel[0] : Wake reason = PWRB
MacBookPro kernel[0] : Wake reason = EHC2
MacBookPro kernel[0] : Wake reason = OHC1
So what do these wake reason codes mean?
OHC: stands for Open Host Controller; it is usually USB or FireWire. If you see OHC1 or OHC2, it is almost certainly an external USB keyboard or mouse that has woken up the machine.
EHC: stands for Enhanced Host Controller, another USB interface, but it can also be wireless devices and Bluetooth, since they are also on the USB bus of a Mac.
USB: a USB device woke the machine up.
LID0: literally the lid of your MacBook or MacBook Pro; when you open the lid, the machine wakes up from sleep.
PWRB: stands for Power Button, the physical power button on your Mac.
RTC: Real Time Clock alarm, generally from wake-on-demand services, such as when you schedule sleep and wake on a Mac via the Energy Saver control panel. It can also be from launchd settings, user applications, backups, and other scheduled events.
There may be some other codes (like PCI, GEGE, etc.), but the above are the ones most people will encounter in the system logs. Once you know these codes, you can really narrow down what is causing your Mac to wake from sleep seemingly at random.
Hope this will help :)
This answer is based on Linux, so it might not apply strictly to Mac.
To determine whether rtcwake is responsible for your macOS wakeups, you could replace the executable (on my Ubuntu it is /usr/sbin/rtcwake) with a wrapper script that leaves evidence that rtcwake has run, e.g.
$ cd /usr/sbin
$ sudo mv rtcwake rtcwake_orig
and then create a script /usr/sbin/rtcwake containing
#!/bin/bash
touch "$HOME/rtcwake_ran"
/usr/sbin/rtcwake_orig "$@"
Variants of the script would depend on your shell.
In particular, in the last line you might want to run rtcwake in some alternative way so as not to own the process (nohup / disown).
See https://unix.stackexchange.com/questions/152310/how-to-correctly-start-an-application-from-a-shell
To inspect possible causes of the wakeup, you can check the relevant logs under /var/log, e.g. syslog* and acpi*.
See also https://unix.stackexchange.com/questions/83036/where-is-the-log-for-acpi-events
Do you have wake-on-LAN (wakeonlan) enabled?
Here I am documenting my systematic approach. It is loosely based on, and initiated by, the answer by @vijay-rajpurohit, which is in turn based on a comment by @Robert (#1431720). Note that the final result is particular to my macOS machine, based on the logs shown below; it will be different on your macOS.
In a first attempt, I checked the logs using log show --style syslog | grep ..., but it took too long. While exploring /var/log/, I stumbled upon /var/log/wifi.log (I am also curious about /var/log/powermanagement/*.asl).
This turned out to be most useful:
cat /var/log/wifi.log|grep -i "Wake reason"
Then I found this line (note the EC. prefix):
Thu Apr 23 22:41:32.359 Info: <airportd[219]> _systemWokenByWiFi: System wake reason: <EC.ARPT>, was woken by WiFi
Then, googling for EC.ARPT, I found the following commands:
pmset -g log: useful stats about "Total Sleep/Wakes since boot".
pmset -g assertions: this turned out to show the full answer to this question:
2020-04-24 02:23:38 +0100
Assertion status system-wide:
BackgroundTask 1
ApplePushServiceTask 0
UserIsActive 1
PreventUserIdleDisplaySleep 0
PreventSystemSleep 0
ExternalMedia 0
PreventUserIdleSystemSleep 0
NetworkClientActive 0
Listed by owning process:
pid 111(hidd): [0x0000200a000986a9] 00:00:00 UserIsActive named: "com.apple.iohideventsystem.queue.tickle.4295010950.3"
pid 85(apsd): [0x0003b830000b90bd] 00:00:10 ApplePushServiceTask named: "com.apple.apsd-waitingformessages-push.apple.com"
Kernel Assertions: 0x100=MAGICWAKE
id=504 level=255 0x100=MAGICWAKE mod=24/04/2020, 01:57 description=en0 owner=en0
Idle sleep preventers: IODisplayWrangler
In short, as a systematic approach, I explored the following keywords from the logs and googled each:
EC.ARPT (example link)
iohideventsystem (example link)
MAGICWAKE (example link)
ApplePushServiceTask (see below)
The most informative item emerged from the output of pmset -g assertions, for example ApplePushServiceTask in the following line:
pid 85(apsd): [0x0003b830000b90bd] 00:00:10 ApplePushServiceTask named: "com.apple.apsd-waitingformessages-push.apple.com"
The solution that seems to work in my particular case (not a general solution) was to disable
/System/Library/LaunchDaemons/com.apple.apsd.plist using launchctl. This cannot be done until you run csrutil disable from recovery mode. I am not writing the instructions here because it needs caution, and you need to re-enable it later.
(to be updated)

How can I show printk() message in console?

The information printed by printk() can only be seen on the Alt+Ctrl+F1 ~ F7 virtual consoles.
These consoles are very inconvenient for debugging since they cannot scroll back. I am using the KDE desktop environment and a terminal emulator; how can I redirect printk() messages to that terminal?
The syntax of printk is
printk(LOG_LEVEL "message format", arguments);
where LOG_LEVEL is one of the eight log levels the kernel defines in the file printk.h:
#define KERN_EMERG "<0>" /* system is unusable*/
#define KERN_ALERT "<1>" /* action must be taken immediately*/
#define KERN_CRIT "<2>" /* critical conditions*/
#define KERN_ERR "<3>" /* error conditions*/
#define KERN_WARNING "<4>" /* warning conditions*/
#define KERN_NOTICE "<5>" /* normal but significant condition*/
#define KERN_INFO "<6>" /* informational*/
#define KERN_DEBUG "<7>" /* debug-level messages*/
Each log level corresponds to a number, and the lower the number, the higher the importance of the message.
The levels are useful in deciding what should be displayed to the user on the console and what should not be.
Every console has a log level, called the console log level. Any message with a log level number lower than the console log level is displayed on the console; all messages, whatever their level, also end up in the kernel log buffer, which can be inspected with the command "dmesg".
The console loglevel can be found by looking into the file /proc/sys/kernel/printk
$ cat /proc/sys/kernel/printk
4 4 1 7
The first number in the output is the console log level, the second is the default log level, third is the minimum log level and fourth is the maximum log level.
Log level 4 corresponds to KERN_WARNING. Thus all messages with log levels 3, 2, 1, and 0 are displayed on the screen as well as logged, while messages with log levels 4, 5, 6, and 7 are only logged and can be viewed using "dmesg".
The console log level can be changed by writing into the proc entry
$ echo "6" > /proc/sys/kernel/printk
$ cat /proc/sys/kernel/printk
6 4 1 7
Now the console log level is set to 6, which is KERN_INFO.
Since you want every message, including KERN_DEBUG, to appear on the console, set the console log level to the highest number, 8:
echo "8" > /proc/sys/kernel/printk
tail -f /var/log/kern.log &
or
cat /proc/kmsg & (Android Environment)
Use
dmesg -wH &
to make all your kernel messages that are printed to dmesg (and, depending on your /proc/sys/kernel/printk log level and the level of your message, also to the virtual terminals like Ctrl+Alt+F1) appear at your SSH or GUI console as well: Konsole, Terminal, or whatever you are using. And if you need to monitor only for specific messages:
dmesg -wH | grep ERR &
I'm using it to monitor for the "ERROR" messages like
printk(KERN_EMERG "ERROR!\n");
that I printk from my driver
printk() is a function provided by the Linux kernel to print debug/information/error messages. Internally, the kernel maintains a circular buffer that is __LOG_BUF_LEN bytes long (depending on the configuration, it can range from 4KB to 1MB).
There are 8 possible loglevels associated with messages, defined in linux/kernel.h:
KERN_EMERG: Emergency (system is unusable)
KERN_ALERT: Serious problem (i.e. action must be taken immediately)
KERN_CRIT: Critical condition, usually related to hardware or software failure
KERN_ERR: Used for error conditions, usually related to hardware difficulties
KERN_WARNING: Used to warn about problematic situations that are not serious
KERN_NOTICE: Normal situations that require notification
KERN_INFO: Informational messages; many drivers print information about the hardware found
KERN_DEBUG: Used only for debugging
Each string represents a number ranging from 0 to 7, with smaller values representing higher priorities. The default log level is equal to the
DEFAULT_MESSAGE_LOGLEVEL variable specified in kernel/printk/printk.c.
How messages can be read from user-level depends both on the configuration of some user-level daemons (e.g., klogd and syslogd) and on the default loglevel. To answer your question, depending on your specific configuration, one or more of the following commands will allow you to read the output of printk:
The dmesg console command (usually, the preferred way for one-shot manual checking)
The tail -f /var/log/kern.log command
Through /proc/kmsg (discouraged)
Depending on your configuration, you may also want to change the default loglevel shown in console. Starting from klogd 2.1.31, the default loglevel can be changed by echoing into /proc/sys/kernel/printk. Examples:
echo 5 > /proc/sys/kernel/printk will display on console only messages with loglevel from 0 to 4
echo 8 > /proc/sys/kernel/printk will display on console messages with any loglevel
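Besides those commands, a program can read the kernel ring buffer directly via klogctl(3), the same interface dmesg uses. A minimal sketch (assumes kernel.dmesg_restrict is off or the process has CAP_SYSLOG):
#include <stdio.h>
#include <stdlib.h>
#include <sys/klog.h>

int main(void)
{
    /* 10 == SYSLOG_ACTION_SIZE_BUFFER, 3 == SYSLOG_ACTION_READ_ALL */
    int len = klogctl(10, NULL, 0);
    if (len < 0) { perror("klogctl"); return 1; }

    char *buf = malloc(len);
    if (!buf) return 1;

    int n = klogctl(3, buf, len);   /* copies the buffer without clearing it */
    if (n < 0) { perror("klogctl"); free(buf); return 1; }

    fwrite(buf, 1, n, stdout);
    free(buf);
    return 0;
}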

syslog: process specific priority

I have two user processes A and B. Both use syslog using facility LOG_USER.
I want to have different threshold levels for them:
For A, only messages of priority ERR-and-above must be logged
For B, only messages of priority CRIT-and-above must be logged
I found that if I set up /etc/syslog.conf as
user.err /var/log/messages
then messages of ERR-and-above are logged, but, from both A and B.
How can I have different minimum threshold levels for different processes?
Note: I am exploring whether there is a config-file-based solution. Otherwise, there is another approach that works: in each process, we can use setlogmask() to install a process-specific priority mask.
EDIT (Nov 18): I want to use syslog and some portable solution.
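For reference, the setlogmask() approach mentioned in the note above would look roughly like this in each process (a sketch; "procA" and the example messages are illustrative, and process B would use LOG_UPTO(LOG_CRIT) instead):
#include <syslog.h>

int main(void)
{
    openlog("procA", LOG_PID, LOG_USER);
    setlogmask(LOG_UPTO(LOG_ERR));          /* process A: ERR and above only */

    syslog(LOG_ERR, "this reaches syslogd");
    syslog(LOG_WARNING, "this is filtered inside the process");

    closelog();
    return 0;
}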
A config file based solution is available. I think CentOS by default ships with rsyslog and even if it does not, you can always install rsyslog with yum. This solution works only with rsyslog and nothing else.
There is a catch, though. You cannot separate log messages with rsyslog (or pretty much any syslog daemon implementation) between processes with the same name, i.e. the same executable path. However, rsyslog does allow you to filter messages based on the program name. Here lies a possible solution: most programs call openlog(3) using argv[0], i.e. the executable name, as the first argument. Since you don't reveal the actual program you're running, there is no way to determine this for you, but you can always read the sources of your own program, I guess.
In most cases the executable path is the program name, though some daemons do fiddle with argv[0] (notable examples are postfix and sendmail). Rsyslog, on the other hand, provides a filtering mechanism which allows one to filter messages based on the name of the sending program (you can now probably see how this is all connected to how openlog(3) is called). So, instead of trying to filter processes directly, we can filter on program names, and those we can affect by creating symbolic links.
So, this solution only works given the following conditions: a) the process you're running does not fiddle with argv[0] after beginning execution; b) you can create symlinks to the binary, thus creating two different names for the same program; c) your program calls openlog(3) using argv[0] as the first parameter.
Given those three conditions, you can simply filter messages in /etc/rsyslog.conf like this (example directly from the rsyslog documentation):
if $programname == 'prog1' then {
action(type="omfile" file="/var/log/prog1.log")
}
if $programname == 'prog2' then {
action(type="omfile" file="/var/log/prog2.log")
}
E.g. if your program is called /usr/bin/foobar and you've created symbolic links /usr/bin/prog1 and /usr/bin/prog2 both pointing at /usr/bin/foobar, the above configuration file example will then direct messages from processes started as "prog1" and "prog2" to different log files respectively. This example will not fiddle with anything else, so all those messages are still going to general log files, unless you filter them out explicitly.
This tutorial http://www.freebsd.org/cgi/man.cgi?query=syslog.conf&sektion=5 helped me. The following seems to work:
# process A: log only error and above
!A
*.err /var/log/messages
# process B: log only critical and above
!B
*.crit /var/log/messages
# all processes other than A and B: log only info and above
!-A,B
*.info /var/log/messages

Cygwin top command - See processes for all users

Does anybody know how to see the processes of all users using the top command in Cygwin (part of the procps package)?
I know this can be done on *nix, but I am struggling in Cygwin. I have tried using pslist, but it does not behave in a PuTTY SSH console.
I need a solution where I can see a top-like display over SSH. I do not have any NTLM SSO access to the Win2k3 guest at all, so SSH is the only way in.
top only displays Cygwin processes. ps -W will list Windows processes as well.
Many times the command "tasklist" gets the job done more effectively. It is built into Windows; just make sure your System32 folder is part of your bash profile PATH. There is also procps itself. You should also try using mintty for your terminal. You could always try attaching any of these task apps to screen, and/or using watch to poll the information.
It seems you can do something like:
wmic process get ProcessId,Name,UserModeTime,KernelModeTime /EVERY:1
The User and Kernel mode times there seem to be expressed in units of 100 nanoseconds (1/10,000,000th of a second).
You should be able to post-process that output to get the CPU-usage per second.
Here using cygwin's perl:
wmic process get ProcessId,Name,UserModeTime,KernelModeTime /EVERY:1 |
perl -lne '
if (/\S/) {
my ($k,$c,$p,$u) = split /\s{2,}/;
$n{"$p\t$c"}=$k+$u;
} else {
my %c;
for my $k (keys %n) {
$c{$k} = $n{$k} - $o{$k} if defined $o{$k}
}
print "$_\t" . $c{$_}/1e5 for (sort {$c{$b}<=>$c{$a}} keys %c)[0..20];
%o = %n; %n = (); print ""
}'
Outputs something like:
0 System Idle Process 588.12377
2196 sh.exe 107.00075
248 svchost.exe 85.80055
7140 explorer.exe 26.52017
[...]
every second.
Note that if the System Idle Process shows just under 800% on an idle system, that's because your system has 8 CPU cores (well, at least 8 hardware threads), as that counts the CPU time of all CPUs.
Also note that the EVERY:1 above is a lie: wmic doesn't seem to give that output every second. More likely, it sleeps roughly 1 second between reports and doesn't compensate for the time it takes to compute the report. So in practice, it runs every second and a bit, which means those percentages are not very accurate and are slightly overestimated.

Can syslog Performance Be Improved?

We have an application on Linux that used the syslog mechanism. After a week spent trying to figure out why this application was running slower than expected, we discovered that if we eliminated syslog, and just wrote directly to a log file, performance improved dramatically.
I understand why syslog is slower than direct file writes. But I was wondering: Are there ways to configure syslog to optimize its performance?
You can configure syslogd (and rsyslog at least) not to sync the log files after a log message by prepending a "-" to the log file path in the configuration file. This speeds up performance at the expense of the danger that log messages could be lost in a crash.
There are several options to improve syslog performance:
Optimizing out calls with a macro
#include <syslog.h>

int LogMask = LOG_UPTO(LOG_WARNING);

/* Skip the syslog(3) call entirely unless the priority is enabled in LogMask. */
#define syslog(pri, ...) \
    do { if (LOG_MASK(LOG_PRI(pri)) & LogMask) syslog((pri), __VA_ARGS__); } while (0)

int main(int argc, char **argv)
{
    setlogmask(LogMask);
    ...
}
An advantage of using a macro to filter syslog calls is that the entire call is
reduced to a conditional jump on a global variable, very helpful if you happen to
have DEBUG calls which are translating large datasets through other functions.
setlogmask()
setlogmask(LOG_UPTO(LOG_LEVEL))
setlogmask() will optimize the call by not logging to /dev/log, but the program will
still call the functions used as arguments.
filtering with syslog.conf
*.err /var/log/messages
"check out the man page for syslog.conf for details."
configure syslog to do asynchronous or buffered logging
metalog used to buffer log output and flush it in blocks. Stock syslog and syslog-ng
do not do this as far as I know.
Before embarking on writing a new daemon, you can check whether syslog-ng is faster (or can be configured to be faster) than plain old syslog.
One trick you can use if you control the source to the logging application is to mask out the log level you want in the app itself, instead of in syslog.conf. I did this years ago with an app that generated a huge, huge, huge amount of debug logs. Rather than remove the calls from the production code, we just masked them so that debug-level calls never got sent to the daemon. I actually found the code; it's Perl, but it's just a front end to the setlogmask(3) call.
use Sys::Syslog;
# Start system logging
# setlogmask controls what levels we're going to let get through. If we mask
# them off here, then the syslog daemon doesn't need to be concerned by them
# 1 = emerg
# 2 = alert
# 4 = crit
# 8 = err
# 16 = warning
# 32 = notice
# 64 = info
# 128 = debug
Sys::Syslog::setlogsock('unix');
openlog($myname,'pid,cons,nowait','mail');
setlogmask(127); # allow everything but debug
#setlogmask(255); # everything
syslog('debug',"syslog opened");
Not sure why I used decimal instead of a bitmask... shrug
Write your own syslog implementation. :-P
This can be accomplished in two ways.
Write your own LD_PRELOAD hook to override the syslog functions, and make them output to stderr instead. I actually wrote a post about this many years ago: http://marc.info/?m=97175526803720 :-P
Write your own syslog daemon. It's just a simple matter of grabbing datagrams out of /dev/log! :-P
Okay, okay, so these are both facetious answers. Have you profiled syslogd to see where it's choking up most?
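That said, the LD_PRELOAD variant from option 1 is only a few lines. A rough sketch (not production code; a fortified build may route through __syslog_chk instead, and %m is not handled):
/* syslog_stderr.c - redirect syslog to stderr via symbol interposition.
 * Build: gcc -shared -fPIC -o syslog_stderr.so syslog_stderr.c
 * Use:   LD_PRELOAD=./syslog_stderr.so ./your_app
 */
#include <stdarg.h>
#include <stdio.h>
#include <syslog.h>

void vsyslog(int priority, const char *format, va_list ap)
{
    fprintf(stderr, "<%d> ", priority);
    vfprintf(stderr, format, ap);
    fputc('\n', stderr);
}

void syslog(int priority, const char *format, ...)
{
    va_list ap;
    va_start(ap, format);
    vsyslog(priority, format, ap);
    va_end(ap);
}

/* openlog/closelog become no-ops so the app never touches /dev/log */
void openlog(const char *ident, int option, int facility) { (void)ident; (void)option; (void)facility; }
void closelog(void) { }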
You may configure syslogd, per level (or facility), to log asynchronously by putting a minus before the path to the logfile (i.e.: user.* [tab] -/var/log/user.log).
Cheers.
The syslog-async() implementation may help, at the risk of lost log lines / bounded delays at other times.
http://thekelleys.org.uk/syslog-async/
Note: 'asynchronous' here refers to queueing log events within your application, and not the asynchronous syslogd output file configuration option that other answers refer to.
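To illustrate what queueing within the application means, here is a rough sketch (not the syslog-async API, just the general idea): the hot path only appends to an in-memory queue, and a background thread makes the blocking syslog(3) calls. Compile with -lpthread; the queue size, drop-on-overflow policy, and names are all arbitrary choices:
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <syslog.h>
#include <unistd.h>

#define QLEN 1024
#define MSGLEN 256

static char queue[QLEN][MSGLEN];
static int qpri[QLEN];
static int head, tail;                     /* head = next read, tail = next write */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

static void log_async(int pri, const char *msg)
{
    pthread_mutex_lock(&lock);
    int next = (tail + 1) % QLEN;
    if (next != head) {                    /* drop on overflow instead of blocking */
        snprintf(queue[tail], MSGLEN, "%s", msg);
        qpri[tail] = pri;
        tail = next;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&lock);
}

static void *drain(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&nonempty, &lock);
        int pri = qpri[head];
        char msg[MSGLEN];
        memcpy(msg, queue[head], MSGLEN);
        head = (head + 1) % QLEN;
        pthread_mutex_unlock(&lock);
        syslog(pri, "%s", msg);            /* the slow call happens off the hot path */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    openlog("asynclog-demo", LOG_PID, LOG_USER);
    pthread_create(&t, NULL, drain, NULL);

    for (int i = 0; i < 10; i++)
        log_async(LOG_INFO, "queued message");

    sleep(1);                              /* give the drain thread time to flush */
    closelog();
    return 0;
}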
