printk() messages not appearing in console - linux

So I'm trying to learn to write Linux modules and right now I'm experimenting with a basic "Hello World" module:
#include <linux/module.h>
#include <linux/init.h>

MODULE_LICENSE("Dual BSD/GPL");

static int hello_init(void)
{
        printk(KERN_ALERT "Hello, world.\n");
        return 0;
}

static void hello_exit(void)
{
        printk(KERN_ALERT "goodbye.\n");
}

module_init(hello_init);
module_exit(hello_exit);
And I've finally gotten this module to work! When I load it with insmod it prints "hello" to kernel.log, and when I remove it with rmmod it prints "goodbye" to kernel.log.
My trouble is that I decided I want to try to get the output to also print to the console. From what I understand about printk(), for a message to show up on the console, its level must be lower than the console log level set in /proc/sys/kernel/printk. (This is all according to https://elinux.org/Debugging_by_printing.) My console is set to level 4.
cat /proc/sys/kernel/printk:
4 4 1 7
Since KERN_ALERT is level 1 and my console is set to print messages below level 4, why are the printk messages not appearing on my console? When I run dmesg I can see the messages are clearly in the buffer, but they never go to the console. It's not that I really need them to print to the console, but I really want to understand how this all works.

I hope I can answer your question. I faced the same issue and tried my best to get kernel messages printed to the console, but nothing worked, so I started searching for the reason...
The reason is: if klogd is not running, the messages won't reach user space unless you read /proc/kmsg or /dev/kmsg yourself (reference: O'Reilly's Linux Device Drivers). klogd reads kernel log messages and forwards them to the appropriate files, sockets, or users. In the absence of the daemon, nothing sends them to your terminal; they simply remain in the ring buffer until they are read or overwritten.
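For illustration, here is a minimal sketch of what such a daemon does: read records from /dev/kmsg, where one read() returns exactly one log record. (With O_NONBLOCK, read() fails with EAGAIN once the existing records are exhausted instead of blocking for new ones.)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[8192];
        ssize_t n;
        int fd = open("/dev/kmsg", O_RDONLY | O_NONBLOCK);

        if (fd < 0) {
                perror("open /dev/kmsg");
                return 1;
        }
        /* each read() returns one record, e.g. "1,42,160462,-;Hello, world." */
        while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
                buf[n] = '\0';
                fputs(buf, stdout);
        }
        close(fd);
        return 0;
}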

Related

How can I create a testing script with GDB

I have a set of inputs that I want to use to test my program, to see which input will hit a breakpoint. I want to create a script to test these inputs one by one and, if one hits the breakpoint, print or save the result to a file.
Please let me know if it's possible and, if yes, how I can do it. Thank you.
I'm not sure I have understood exactly what you are asking for. But if I understood correctly, you want to write a program that:
Starts another program
Passes some pre-defined input to the other program
Checks if some breakpoint in the other program is hit
I don't know if this is possible using gdb, but it would be possible to write your own debugger:
Start the program to be tested using fork and one of the exec functions (such as execlp)
Before the exec, call ptrace(PTRACE_TRACEME,0,0,0)
Call waitpid; if exec succeeded, the program will be stopped immediately. The status (stored through the second argument) returned by waitpid will be 0x57F, i.e. (SIGTRAP << 8) | 0x7F (assuming an x86 CPU).
If waitpid stores any other status, exec failed and you cannot continue.
Use ptrace(PTRACE_PEEKTEXT,...) and ptrace(PTRACE_POKETEXT,...) to modify the program: you place a breakpoint at some address by replacing the instruction at that address with a "breakpoint" instruction (on x86 CPUs: int3, which is the byte 0xCC)
This means:
You have to know the addresses (not the line numbers) of the break points and write 0xCC to each address using ptrace().
Because PTRACE_POKETEXT can only modify 4 bytes (x86_32) or 8 bytes (x86_64) at once, you first have to read the old values of these 4 or 8 bytes using PTRACE_PEEKTEXT, modify 1 of 4 or 8 bytes and write all 4 or 8 bytes back.
If your program is not always loaded at the same address (due to ASLR etc.), you can read the program counter (using PTRACE_PEEKUSER): at the initial stop it should be the actual address of the program's entry point.
Call ptrace(PTRACE_CONT,pid,0,0) to start the program being tested
Call waitpid to wait for the program to be stopped or to exit
If waitpid returns 0x57F as "status code", you are in the breakpoint. You may now use kill(pid, SIGKILL) to stop your program.
You may use PTRACE_PEEKUSER to check the value of the program counter (rip on x86-64) so you know which breakpoint has been hit. Note that the program counter may be the address of the breakpoint plus 1, so if a breakpoint at address 0x12340000 has been hit, rip may be 0x12340001.
If waitpid returns any other value with the low byte 0x7F, the program caused an exception. You should use kill(pid,SIGKILL) to finally stop it.
Otherwise (if the low byte returned by waitpid is not 0x7F), the program has finished without causing an exception and without hitting any breakpoint.
Here is some example code:
#include <signal.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        /* Hypothetical breakpoint address -- you must determine the real
         * one beforehand (e.g. with objdump or from the linker map). */
        unsigned long address = 0x401000;
        pid_t pid;
        int code;
        long tmpLong;

        pid = fork();
        if (!pid) {
                ptrace(PTRACE_TRACEME, 0, 0, 0);
                execlp("program_to_be_tested", "program_to_be_tested",
                       (char *)NULL);
                _exit(123);             /* only reached if exec failed */
        }
        waitpid(pid, &code, 0);
        if (code != 0x57F) {
                /* Starting the program failed ... */
        } else {
                /* Set breakpoint - here assuming x86-64 (little-endian,
                 * 8-byte ptrace words): patch one byte of the aligned
                 * word containing the target address */
                tmpLong = ptrace(PTRACE_PEEKDATA, pid,
                                 (void *)(address & ~7UL), 0);
                ((unsigned char *)&tmpLong)[address & 7] = 0xCC;  /* int3 */
                ptrace(PTRACE_POKEDATA, pid,
                       (void *)(address & ~7UL), (void *)tmpLong);

                /* Continue the program */
                ptrace(PTRACE_CONT, pid, 0, 0);
                waitpid(pid, &code, 0);
                if ((code & 0xFF) != 0x7F) {
                        /* Program did not hit a breakpoint
                         * and did not cause an exception */
                } else if (code == 0x57F) {
                        /* Breakpoint hit */
                        kill(pid, SIGKILL);
                } else {
                        /* Program caused an exception */
                        kill(pid, SIGKILL);
                }
        }
        return 0;
}
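To identify which breakpoint was hit, you can read the program counter at the stop, as described in the steps above. A minimal sketch, assuming x86-64 and glibc's <sys/reg.h>:

#include <sys/ptrace.h>
#include <sys/reg.h>    /* RIP: index of rip in the USER area (x86-64) */

/* After waitpid() reported the 0x57F stop: rip points one byte past
 * the int3, so subtract 1 to get the breakpoint address. */
long rip = ptrace(PTRACE_PEEKUSER, pid, (void *)(8 * RIP), 0);
unsigned long hit_address = (unsigned long)rip - 1;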
To pass input to your program, you have two options:
Run the debugger multiple times:
echo "Input to be tested" | ./myDebugger
Because your debugger does not read from STDIN, the input will be passed to the program to be tested.
Use pipe and dup2 when creating the child process:
...
pipe(pipes);
pid = fork();
if (!pid) {
        dup2(pipes[0], 0);      /* the pipe's read end becomes stdin */
        close(pipes[0]);
        close(pipes[1]);
        ...
}
close(pipes[0]);
write(pipes[1], "Input to be sent to program", ...);
...
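Putting the pieces together, the child setup with input injection might look like this minimal sketch (program name hypothetical; the waitpid()/breakpoint handling from the earlier example would follow the write()):

#include <string.h>
#include <sys/ptrace.h>
#include <unistd.h>

int main(void)
{
        int pipes[2];
        pid_t pid;
        const char *input = "Input to be sent to program\n";

        pipe(pipes);
        pid = fork();
        if (!pid) {
                dup2(pipes[0], 0);      /* read end becomes child's stdin */
                close(pipes[0]);
                close(pipes[1]);
                ptrace(PTRACE_TRACEME, 0, 0, 0);
                execlp("program_to_be_tested", "program_to_be_tested",
                       (char *)NULL);
                _exit(123);
        }
        close(pipes[0]);
        write(pipes[1], input, strlen(input));
        close(pipes[1]);                /* EOF so the child doesn't block */
        /* ... waitpid()/ptrace() handling as in the debugger above ... */
        return 0;
}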

Printk behaviour change with early_printk enabled

Normally printk does not print any messages before console_init, which is called from start_kernel. But with early_printk enabled, printk starts printing messages before console initialization. How does this behaviour of printk change, given that I am still using the printk function to print debug messages and not an early_printk function? How is this mapping done?
It's not really a mapping. When early_printk is enabled, the same printk() is used as before; it's just that a new boot console gets registered in that case, and printk() writes to it during the early boot stages.
Look at arch/arm/kernel/early_printk.c. You can see that:
a new console is registered with the register_console() function
that console has the CON_BOOT flag, so it is unregistered automatically as soon as a real console is registered
printing happens via the early_write() function, which in turn uses the printch() function, which is implemented separately for each platform
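The registration looks roughly like this (a paraphrased sketch, not the verbatim file -- check arch/arm/kernel/early_printk.c for the exact code):

#include <linux/console.h>
#include <linux/init.h>
#include <linux/kernel.h>

extern void printch(int ch);            /* implemented per platform */

static void early_write(const char *s, unsigned int n)
{
        while (n-- > 0)
                printch(*s++);
}

static void early_console_write(struct console *con, const char *s,
                                unsigned int n)
{
        early_write(s, n);
}

static struct console early_console_dev = {
        .name  = "earlycon",
        .write = early_console_write,
        .flags = CON_PRINTBUFFER | CON_BOOT,  /* CON_BOOT: auto-removed */
        .index = -1,
};

static int __init setup_early_printk(char *buf)
{
        register_console(&early_console_dev);
        return 0;
}
early_param("earlyprintk", setup_early_printk);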
Where in the kernel source is the early console disabled after kernel console initialization?
It's done in register_console() function:
if (bcon &&
    ((newcon->flags & (CON_CONSDEV | CON_BOOT)) == CON_CONSDEV) &&
    !keep_bootcon) {
        /* We need to iterate through all boot consoles, to make
         * sure we print everything out, before we unregister them.
         */
        for_each_console(bcon)
                if (bcon->flags & CON_BOOT)
                        unregister_console(bcon);
}
All boot consoles are disabled by the unregister_console() call in the code above (when the real console is being registered).
And where is the real console getting registered?
Real consoles use the same method for registration -- the register_console() function. For example:
from my board's defconfig file (arch/arm/configs/omap2plus_defconfig) I can see that my board uses CONFIG_SERIAL_8250 for its real console
we can search for where register_console() is executed in that serial driver; it's done in the univ8250_console_init() function
Is there any way to keep boot consoles up after console initialization and disable real console?
Boot consoles are automatically unregistered only when a real console is registered. Following this logic, you just need to keep the real console from being registered in order to keep the boot console intact.
So what you need to do is find out exactly which driver provides the real console in your case. You can do that by looking into your .config file, or the *_defconfig file for your board. Once you've located it, just disable that driver in the configuration and rebuild the kernel.
If after doing so you still observe some real console being registered, add some debug prints to register_console() to figure out which driver is being registered, and then disable it in your configuration.
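For example, if CONFIG_SERIAL_8250 from above provides the real console, the relevant .config fragment could look something like this (illustrative sketch; option names as in the answer above):

# Keep the serial driver but drop its console, so register_console()
# is never called for it and the CON_BOOT console stays in place:
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_CONSOLE is not set
CONFIG_EARLY_PRINTK=y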

how to figure out what NL messages are exchanged

Hi Linux kernel/net guru,
I'm looking for a way to hook and print out the NL (netlink) messages exchanged between wpa_supplicant and the kernel. So far I have just inserted several printk messages to print them, but that is very painful.
Please let me know if you have a better idea.
Thanks.
This is not a good answer given the OP is using wpa_supplicant specifically but might help people drawn here by accident.
If somebody is using libnl (wpa_supplicant doesn't), all you have to do in userspace, once the socket has been initialized, is:
error = nl_socket_modify_cb(sk, NL_CB_MSG_IN, NL_CB_DEBUG, NULL, NULL);
if (error < 0)
        log_err("Could not register debug cb for incoming packets.");

error = nl_socket_modify_cb(sk, NL_CB_MSG_OUT, NL_CB_DEBUG, NULL, NULL);
if (error < 0)
        log_err("Could not register debug cb for outgoing packets.");
The userspace client will print all messages whenever it sends or receives them.
(Also, you can alternatively call nl_msg_dump(msg, stderr) whenever you want.)
For stuff that doesn't use libnl, you can always copy the relevant functions from libnl and call them. See nl_msg_dump() in libnl's source code (libnl/lib/msg.c).
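The dump call itself is a one-liner; a minimal sketch, assuming a struct nl_msg * you have just built or received and libnl headers available:

#include <netlink/msg.h>
#include <stdio.h>

/* Print one netlink message in libnl's human-readable format. */
static void debug_dump_msg(struct nl_msg *msg)
{
        nl_msg_dump(msg, stderr);
}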

difference between console log level and default log level

In module programming I read that a message whose log level is less than the console log level gets displayed on the console, while higher levels only end up in the log files, and that if I don't specify any log level in a printk statement, the default log level is used.
I just looked at the default and console log levels with
cat /proc/sys/kernel/printk
and the result was
4 4 1 7
Here both the default and console levels are the same. I don't understand why a default log level exists. Is the default log level actually used anywhere? What is the exact difference between the console log level and the default log level? I am new to module programming.
As we know, the kernel defines these log levels:
#define KERN_EMERG   "<0>"  /* system is unusable               */
#define KERN_ALERT   "<1>"  /* action must be taken immediately */
#define KERN_CRIT    "<2>"  /* critical conditions              */
#define KERN_ERR     "<3>"  /* error conditions                 */
#define KERN_WARNING "<4>"  /* warning conditions               */
#define KERN_NOTICE  "<5>"  /* normal but significant condition */
#define KERN_INFO    "<6>"  /* informational                    */
#define KERN_DEBUG   "<7>"  /* debug-level messages             */
Okay, let us take them separately.
The console log level is the threshold for what gets displayed on the console: a printk message appears there only if its level is numerically lower than the console log level (4 in your case).
That is, messages with levels 0, 1, 2 and 3 are printed to the console; the rest (4 to 7) only go to the circular buffer maintained by the kernel, which you can read by issuing "dmesg".
Now on to the default log level: whenever you use printk without any log level, for example:
printk("Insmod my first driver\n"); /* treated as KERN_WARNING, since the default log level is 4 */
the message is assigned the default log level.
So the difference: the console log level decides what is printed on the console, while the default log level is the level assumed for a printk that doesn't specify one in kernel module programming.
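To make this concrete, a short sketch of what happens with "4 4 1 7":

printk(KERN_ERR "level 3 < console level 4: appears on the console\n");
printk(KERN_WARNING "level 4: not below 4, ring buffer (dmesg) only\n");
printk("no level given: default level 4 is assumed, ring buffer only\n");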

Linux SIGPIPE Crashing Server

So at the start of my application I call
signal(SIGPIPE, SIG_IGN);
which I thought would make my application ignore SIGPIPE. However, I still got a SIGPIPE crash with the following code:
write(fd, outgoingStr->c_str(), size);
where fd is an int (file descriptor) and size is the size of the string. What am I doing wrong here?
On a side note, I used to wrap that write inside an if to check for an error return value, and I believe I never had SIGPIPE crashes until that check was removed. The if did nothing but cout to the console if there was an error, so I'm not sure whether it's relevant.
The problem ended up being that GDB will stop on SIGPIPE even if it is being ignored. When running the application normally it works as intended.
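If you want GDB to pass SIGPIPE through the way a normal run does, you can tell it so with its standard signal-handling command:

(gdb) handle SIGPIPE nostop noprint pass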
