Debug messages not being printed on the console - Linux

I am trying to enable printing of debug messages on the console.
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int test_hello_init(void)
{
    printk(KERN_INFO "%s: In init\n", __func__);
    return 0;
}

static void test_hello_exit(void)
{
    printk(KERN_INFO "%s: In exit\n", __func__);
}

module_init(test_hello_init);
module_exit(test_hello_exit);
To get the info-level messages on the console, I executed the following command: dmesg -n7
cat /proc/sys/kernel/printk
7 4 1 7
When I load the module using insmod, I don't get any message on the terminal, though it shows up when I run dmesg. What mistake am I making here?

Messages from the kernel are not printed to your terminal (unless that terminal is the console given with console= on the kernel command line). They are appended to the kernel log buffer, which lives inside the kernel and is exposed to user space through the device file /dev/kmsg. The dmesg command reads this file to print the kernel log on your terminal. Note that dmesg -n 7 only raises the console log level, which controls what the kernel prints on the system console (a virtual terminal or serial console), not on a pseudo-terminal such as an SSH session.
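For illustration, here is a rough sketch of how a dmesg-like reader could pull records from /dev/kmsg. This is not the actual dmesg source; it assumes Linux, read permission on /dev/kmsg, and relies on the documented behaviour that each read() returns one log record and that a non-blocking read fails with EAGAIN at the end of the buffer.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[8192];
    int fd = open("/dev/kmsg", O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open /dev/kmsg");
        return 1;
    }
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* one record per read */
        if (n < 0) {
            if (errno == EAGAIN)        /* reached the end of the log */
                break;
            if (errno == EPIPE)         /* ring buffer overwrote records; keep going */
                continue;
            perror("read");
            break;
        }
        buf[n] = '\0';
        fputs(buf, stdout);             /* record format: "prio,seq,usec,flags;message" */
    }
    close(fd);
    return 0;
}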

Related

Why can GDB mask tracee's SIGKILL when attaching to the tracee

The signal(7) man page says that SIGKILL cannot be caught, blocked, or ignored. But I just observed that after attaching to a process with GDB, I can no longer kill that process with SIGKILL (similarly, other signals cannot be delivered either). But after I detach and quit GDB, the SIGKILL is delivered as usual.
It seems to me that GDB has blocked that signal (on behalf of the tracee) when attaching, and unblocked it when detaching. However, the ptrace(2) man page says:
While being traced, the tracee will stop each time a signal is delivered, even if the signal is being ignored. (An exception is SIGKILL, which has its usual effect.)
So why does it behave this way? What tricks is GDB using?
Here is a trivial example for demonstration:
1. test program
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <errno.h>
#include <string.h>

/* Simple error handling functions */
#define handle_error_en(en, msg) \
    do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)

struct sigaction act;

void sighandler(int signum, siginfo_t *info, void *ptr) {
    printf("Received signal: %d\n", signum);
    printf("signal originate from pid[%d]\n", info->si_pid);
}

int
main(int argc, char *argv[])
{
    printf("Pid of the current process: %d\n", getpid());
    memset(&act, 0, sizeof(act));
    act.sa_sigaction = sighandler;
    act.sa_flags = SA_SIGINFO;
    sigaction(SIGQUIT, &act, NULL);
    while (1) {
        ;
    }
    return 0;
}
If you try to kill this program with SIGKILL (i.e., kill -KILL ${pid}), it dies as expected. If you send it SIGQUIT (i.e., kill -QUIT ${pid}), the printf statements in the handler run, as expected. However, if you have attached to it with GDB before sending the signal, nothing happens:
$ ##### in shell 1 #####
$ gdb
(gdb) attach ${pid}
(gdb)
/* now that gdb has attached successfully, in another shell: */
$ #### in shell 2 ####
$ kill -QUIT ${pid} # nothing happens
$ kill -KILL ${pid} # again, nothing happens!
/* now gdb detached */
##### in shell 1 ####
(gdb) quit
/* the process will receive SIGKILL */
##### in shell 2 ####
$ Killed    # the tracee receives SIGKILL eventually...
FYI, I am using CentOS 6u3 and uname -r reports 2.6.32_1-16-0-0. My GDB version is GNU gdb (GDB) Red Hat Enterprise Linux (7.2-56.el6) and my GCC version is gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-19.el6). An old machine...
Any ideas will be appreciated ;-)
$ ##### in shell 1 #####
$ gdb
(gdb) attach ${pid}
(gdb)
The issue is that once GDB has attached to ${pid}, the inferior (the process being debugged) is no longer running -- it is stopped.
The kernel will not do anything to it until it is either continued (with the (gdb) continue command), or it is no longer being traced ((gdb) detach or quit).
If you issue continue (either before or after kill -QUIT), you'll see this:
(gdb) c
Continuing.
kill -QUIT $pid executed in another shell:
Program received signal SIGQUIT, Quit.
main (argc=1, argv=0x7ffdcc9c1518) at t.c:35
35 }
(gdb) c
Continuing.
Received signal: 3
signal originate from pid[123419]
kill -KILL executed in another window:
Program terminated with signal SIGKILL, Killed.
The program no longer exists.
(gdb)
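To see the same effect without GDB, here is a minimal sketch using ptrace directly (an illustration of the mechanism, not what GDB itself does; error handling omitted). The attached child sits in ptrace-stop, the SIGQUIT sent to it stays pending, and it only takes effect once the tracer detaches:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int status;
    pid_t pid = fork();

    if (pid == 0) {                              /* child: idle, like the test program above */
        for (;;)
            pause();
    }

    sleep(1);                                    /* give the child time to start */
    ptrace(PTRACE_ATTACH, pid, NULL, NULL);      /* child enters ptrace-stop */
    waitpid(pid, &status, 0);                    /* observe the attach stop */

    kill(pid, SIGQUIT);                          /* stays pending; the stopped child does nothing */
    sleep(1);

    ptrace(PTRACE_DETACH, pid, NULL, NULL);      /* like quitting gdb */
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGQUIT)
        printf("SIGQUIT was only acted on after detach\n");
    return 0;
}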

How to access Parallel port in Linux

On my Linux machine (Debian Wheezy), I tried to access the parallel port with request_region(), but it failed because the system had already loaded the kernel module parport...
So I removed (rmmod) the modules lp, ppdev, parport_pc and parport. After that, I could successfully insert my module.
However, inb() on the base address returned 0xff no matter what value was written.
Before removing those modules from the kernel, I could write and read this register. When I instead blacklisted the modules so they are never loaded at system start-up, I could read and write these registers and my module worked too. It seems that the cleanup function of parport_pc does something that leaves the hardware unusable. (At least the state of the port is not the same as it was before the module was loaded.)
My question is why, and what should I do to recover the port instead of reloading parport_pc?
You can use C to write a small program that reads from and writes to the parallel port pins directly via the inb and outb functions. You can then call that program from the command line or from another script. By default, 0x378 is the I/O base address of the parallel port LPT0, so it is just a matter of using inb and outb on that address. For example:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/io.h>   /* ioperm(), inb(), outb() for user space */

#define base 0x378    /* LPT0 */

/* to compile: gcc -O parport.c -o parport */
/* after compiling, set suid: chmod +s parport, then copy to /usr/sbin/ */

int main(void) {
    if (ioperm(base, 1, 1)) {
        fprintf(stderr, "Couldn't open parallel port\n");
        exit(1);
    }
    outb(255, base);  /* set all data pins high */
    sleep(5);
    outb(0, base);    /* set all data pins low */
    return 0;
}
Some driver modules have blocked your access to the parallel port.
Edit the /etc/modprobe.d/blacklist.conf file, add the following lines, then reboot your Linux system.
blacklist ppdev
blacklist lp
blacklist parport_pc
blacklist parport
And if CUPS is installed, you should modify /etc/modules-load.d/cups-filters.conf so these modules are commented out:
#lp
#ppdev
#parport_pc
Here are some details:
https://stackoverflow.com/a/27423675/4350106

How to see the daemon process's output in Linux?

I wrote a test.c:
#include <unistd.h>
#include <stdio.h>

int main()
{
    while (1)
    {
        sleep(1);
        printf("====test====\r\n");
    }
    return 0;
}
Then I compiled it: gcc ./test.c -o ./test
and then I wrote a shell script:
#!/bin/sh
./test &
and then I made this script execute automatically at system boot.
Then I log in to the Linux system with SecureCRT over SSH.
Using ps aux | grep test I can see the test process running,
but I just cannot see its output. Some people told me this is because test writes to a tty while I am using a pts.
Could anybody tell me the exact reason, and how I can get the output?
Thanks in advance!
It doesn't output anything to your session because it has no terminal attached.
If you want the output to be visible on every terminal connected to the system, use wall:
./test | wall
(it will be very annoying)
I suggest you redirect the output to a log file instead, as sketched below.
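For example, redirect in the startup script (./test > /tmp/test.log 2>&1 &), or let the program reopen its own stdout onto a file. A minimal sketch of the latter, using /tmp/test.log purely as an example path:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* send stdout to a file instead of the (missing) terminal */
    if (freopen("/tmp/test.log", "a", stdout) == NULL)
        return 1;
    setvbuf(stdout, NULL, _IOLBF, 0);   /* line-buffered, so lines appear promptly */
    while (1) {
        sleep(1);
        printf("====test====\n");
    }
    return 0;
}

You can then watch the output from any terminal with tail -f /tmp/test.log.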

Why are kernel log messages in the syslog file (or those redirected to terminal) exactly one 'message' behind?

I wrote a simple hello-world module on Ubuntu 10.04 machine. When loading and unloading the module, printk should log messages from the following hello_world and bye_world functions.
static int hello_world()
{
    printk(KERN_INFO "Hello, beautiful world");
    return 0;
}

static void bye_world()
{
    printk(KERN_INFO "Good-bye kernel uncle");
}

module_init(hello_world);
module_exit(bye_world);
However, when actually inserting and removing the hello.ko module, I see that the message printed on the terminal (I redirected it to the current terminal) is exactly one message behind.
# rmmod hello.ko
# Jul 8 16:34:02 panchavati kernel: [64599.954113] Hello, beautiful world
# insmod hello.ko
# Jul 8 16:34:57 panchavati kernel: [65456.367422] Good-bye kernel uncle
From the dmesg output, I can see that the next message entry (corresponding to insmod) has already been logged; it is just not printed yet.
chanakya@panchavati:~$ dmesg | tail -2
[65456.367422] Good-bye kernel uncle
[65511.198299] Hello, beautiful world
The entry [65511.198299] is there in the kernel log, but only the previous entry, [65456.367422] Good-bye kernel uncle, was printed. Why is that?
I earlier thought that the '-' specifier used in /etc/syslog.conf might have something to do with this (it prevents synchronization after every write), but removing it didn't help either.
Have you tried using tailf (or tail -f)? It will print out the messages as they arrive. For example:
tailf /var/log/messages
Try ending your strings with a newline ("\n"). printk output is handled line by line: without a trailing newline the message sits in the buffer until the next printk completes the line, which is why everything shows up one insmod/rmmod behind.
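For reference, this is the same module with the trailing \n added (plus the usual includes and license line), so each message is flushed as soon as it is logged:

#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int hello_world(void)
{
    printk(KERN_INFO "Hello, beautiful world\n");   /* newline: flushed immediately */
    return 0;
}

static void bye_world(void)
{
    printk(KERN_INFO "Good-bye kernel uncle\n");
}

module_init(hello_world);
module_exit(bye_world);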

How to generate a core dump in Linux on a segmentation fault?

I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?
This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether programs may dump core. If you type
ulimit -c unlimited
then that tells bash that its programs can dump cores of any size. You can specify a size such as 52M instead of unlimited if you want, but in practice this shouldn't be necessary, since the size of core files will probably never be an issue for you.
In tcsh, you'd type
limit coredumpsize unlimited
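If you want to check the limit from inside a program rather than from the shell, a small sketch using getrlimit (the programmatic counterpart of ulimit -c) could look like this:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0) {     /* query the core-file size limit */
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("core file size: unlimited\n");
    else
        printf("core file size: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    return 0;
}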
As explained above, the real question being asked here is how to enable core dumps on a system where they are not enabled. That question is answered here.
If you've come here hoping to learn how to generate a core dump for a hung process, the answer is
gcore <pid>
if gcore is not available on your system then
kill -ABRT <pid>
Don't use kill -SEGV, as that will often invoke a signal handler, making it harder to diagnose the stuck process.
To check where the core dumps are generated, run:
sysctl kernel.core_pattern
or:
cat /proc/sys/kernel/core_pattern
where %e is the process name and %t the system time. You can change it in /etc/sysctl.conf and reload with sysctl -p.
If the core files are not generated (test it by: sleep 10 & and killall -SIGSEGV sleep), check the limits by: ulimit -a.
If your core file size is limited, run:
ulimit -c unlimited
to make it unlimited.
Then test again; if core dumping is successful, you will see "(core dumped)" after the segmentation fault indication, as below:
Segmentation fault: 11 (core dumped)
See also: core dumped - but core file is not in current directory?
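If you'd rather trigger the crash from a tiny C program than by killing sleep, a deliberate NULL-pointer write does the job; compile with gcc -g crash.c -o crash so the resulting core is useful in gdb:

#include <stdio.h>

int main(void) {
    int *p = NULL;          /* invalid pointer */
    printf("about to crash\n");
    *p = 42;                /* SIGSEGV here; "(core dumped)" should appear if enabled */
    return 0;
}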
Ubuntu
In Ubuntu the core dumps are handled by Apport and can be located in /var/crash/. However, it is disabled by default in stable releases.
For more details, please check: Where do I find the core dump in Ubuntu?.
macOS
For macOS, see: How to generate core dumps in Mac OS X?
What I did at the end was attach gdb to the process before it crashed, and then when it got the segfault I executed the generate-core-file command. That forced generation of a core dump.
Maybe you could do it this way. This program demonstrates how to trap a segmentation fault and shell out to a debugger (this is the original code, used under AIX), printing the stack trace up to the point of the segmentation fault. You will need to change the sprintf line to use gdb in the case of Linux.
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <stdarg.h>
#include <unistd.h>   /* for getpid() */

static void signal_handler(int);
static void dumpstack(void);
static void cleanup(void);
void init_signals(void);
void panic(const char *, ...);

struct sigaction sigact;
char *progname;

int main(int argc, char **argv) {
    char *s = NULL;   /* NULL so the write below reliably faults */
    progname = *(argv);
    atexit(cleanup);
    init_signals();
    printf("About to seg fault by assigning zero to *s\n");
    *s = 0;
    sigemptyset(&sigact.sa_mask);   /* never reached */
    return 0;
}

void init_signals(void) {
    sigact.sa_handler = signal_handler;
    sigemptyset(&sigact.sa_mask);
    sigact.sa_flags = 0;
    sigaction(SIGINT, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGSEGV);
    sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGBUS);
    sigaction(SIGBUS, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGQUIT);
    sigaction(SIGQUIT, &sigact, (struct sigaction *)NULL);

    sigaddset(&sigact.sa_mask, SIGHUP);
    sigaction(SIGHUP, &sigact, (struct sigaction *)NULL);

    /* note: SIGKILL cannot be caught; this call fails and is kept only
       because it was in the original code */
    sigaddset(&sigact.sa_mask, SIGKILL);
    sigaction(SIGKILL, &sigact, (struct sigaction *)NULL);
}

static void signal_handler(int sig) {
    if (sig == SIGHUP) panic("FATAL: Program hanged up\n");
    if (sig == SIGSEGV || sig == SIGBUS) {
        dumpstack();
        panic("FATAL: %s Fault. Logged StackTrace\n",
              (sig == SIGSEGV) ? "Segmentation" : ((sig == SIGBUS) ? "Bus" : "Unknown"));
    }
    if (sig == SIGQUIT) panic("QUIT signal ended program\n");
    if (sig == SIGKILL) panic("KILL signal ended program\n");
    if (sig == SIGINT) ;
}

void panic(const char *fmt, ...) {
    char buf[50];
    va_list argptr;
    va_start(argptr, fmt);
    vsnprintf(buf, sizeof(buf), fmt, argptr);   /* bounded, unlike vsprintf */
    va_end(argptr);
    fprintf(stderr, "%s", buf);                 /* don't treat buf as a format string */
    exit(-1);
}

static void dumpstack(void) {
    /* Got this routine from http://www.whitefang.com/unix/faq_toc.html
    ** Section 6.5. Modified to redirect to file to prevent clutter
    */
    /* This needs to be changed: replace dbx with gdb on Linux */
    char dbx[160];
    snprintf(dbx, sizeof(dbx), "echo 'where\ndetach' | dbx -a %d > %s.dump",
             getpid(), progname);
    system(dbx);
    return;
}

void cleanup(void) {
    sigemptyset(&sigact.sa_mask);
    /* Do any cleaning up chores here */
}
You may have to additionally add a parameter to get gdb to dump the core, as shown in this blog post.
There are more things that may influence the generation of a core dump. I encountered these:
the directory for the dump must be writable. By default this is the current directory of the process, but that may be changed by setting /proc/sys/kernel/core_pattern.
in some conditions, the kernel value in /proc/sys/fs/suid_dumpable may prevent the core from being generated.
There are more situations which may prevent the generation; they are described in the man page - try man core.
For Ubuntu 14.04
Check that core dumps are enabled:
ulimit -a
One of the lines should be:
core file size (blocks, -c) unlimited
If not:
Edit ~/.bashrc (for example with gedit), add ulimit -c unlimited to the end of the file, save it, and re-open the terminal.
Build your application with debug information:
In the Makefile, add -O0 -g to the compiler flags.
Run the application that creates the core dump (a core file named 'core' should be created next to the application_name binary):
./application_name
Run under gdb:
gdb application_name core
In order to activate core dumps, do the following:
In /etc/profile, comment out the line:
# ulimit -S -c 0 > /dev/null 2>&1
In /etc/security/limits.conf comment out the line:
* soft core 0
Execute the command limit coredumpsize unlimited and check it with limit:
# limit coredumpsize unlimited
# limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 10240 kbytes
coredumpsize unlimited
memoryuse unlimited
vmemoryuse unlimited
descriptors 1024
memorylocked 32 kbytes
maxproc 528383
#
To check whether the core file gets written, you can kill the relevant process with kill -s SEGV <PID> (this should not be needed; it is just a check in case no core file gets written):
# kill -s SEGV <PID>
Once the core file has been written, make sure to deactivate the core dump settings again in the relevant files (1./2./3.)!
Ubuntu 19.04
None of the other answers helped me on their own, but the following summary did the job:
Create ~/.config/apport/settings with the following content:
[main]
unpackaged=true
(This tells apport to also write core dumps for custom apps)
Check ulimit -c. If it outputs 0, fix it with
ulimit -c unlimited
Just in case, restart apport:
sudo systemctl restart apport
Crash files are now written to /var/crash/, but you cannot use them with gdb directly. To use them with gdb, run
apport-unpack <location_of_report> <target_directory>
Further information:
Some answers suggest changing core_pattern. Be aware that that file might get overwritten by the apport service when it restarts.
Simply stopping apport did not do the job.
The ulimit -c value might get changed automatically while you're trying other answers from the web. Be sure to check it regularly while setting up your core dump creation.
References:
https://stackoverflow.com/a/47481884/6702598
By default you will get a core file. Check to see that the current directory of the process is writable, or no core file will be created.
It is better to turn on core dumps programmatically, using the setrlimit system call. For example:
#include <stdbool.h>
#include <sys/resource.h>

/* raise the core-file size limit of this process to "unlimited" */
bool enable_core_dump(void){
    struct rlimit corelim;

    corelim.rlim_cur = RLIM_INFINITY;
    corelim.rlim_max = RLIM_INFINITY;

    return (0 == setrlimit(RLIMIT_CORE, &corelim));
}
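A usage sketch built on the same idea: raise RLIMIT_CORE at startup and then crash on purpose, so you can verify that "(core dumped)" appears (assuming core_pattern/apport allow the dump to be written):

#include <stdbool.h>
#include <stdio.h>
#include <sys/resource.h>

static bool enable_core_dump(void) {
    struct rlimit corelim = { RLIM_INFINITY, RLIM_INFINITY };   /* rlim_cur, rlim_max */
    return setrlimit(RLIMIT_CORE, &corelim) == 0;
}

int main(void) {
    if (!enable_core_dump())
        fprintf(stderr, "could not raise the core size limit\n");
    int *p = NULL;
    *p = 1;   /* deliberate SIGSEGV; should now produce "(core dumped)" */
    return 0;
}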
It's worth mentioning that if you have systemd set up, then things are a little bit different. Such a setup typically has core files piped, by means of the core_pattern sysctl value, through systemd-coredump(8). The core file size rlimit would typically already be configured as "unlimited".
It is then possible to retrieve the core dumps using coredumpctl(1).
The storage of core dumps, etc. is configured by coredump.conf(5). There are examples of how to get the core files in the coredumpctl man page, but in short, it would look like this:
Find the core file:
[vps@phoenix]~$ coredumpctl list test_me | tail -1
Sun 2019-01-20 11:17:33 CET 16163 1224 1224 11 present /home/vps/test_me
Get the core file:
[vps@phoenix]~$ coredumpctl -o test_me.core dump 16163
This is typically sufficient:
ulimit -c unlimited
Note this will not persist between ssh sessions! To add persistence:
echo '* soft core unlimited' >> /etc/security/limits.conf
Now, if you're using Ubuntu, "apport" is probably running. Here's how to check:
sudo systemctl status apport.service
If it is, you'll probably find core dumps in one of these places:
/var/lib/apport/coredump
/var/crash
If you want to change the location of core dumps
Make sure that the directory you're sending core dumps to exists and that you have permission to create files in it!
Here's an example. Note this will not persist across reboots:
sysctl -w kernel.core_pattern=/coredumps/core-%e-%s-%u-%g-%p-%t
mkdir /coredumps
Make sure that the process that's crashing has permission to write there. The easiest way is something like this:
chmod 777 /coredumps
Test that core dumps work
> crash.c
gcc -Wl,--defsym=main=0 crash.c
./a.out
==output== Segmentation fault (core dumped)
If it doesn't say "core dumped" above, something isn't working.
