Checking if coredumps are disabled for current process - linux

Is there an easy way to check whether the current process has coredumps enabled, so a library can instead return an error before loading encryption keys?

You can use the ulimit command (or the corresponding system call) to check the core dump size limit. If it is zero, no core dumps are produced. Note that the manual page suggests using getrlimit() and setrlimit() instead of the obsolete ulimit().
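As a rough sketch, a library could perform that check with getrlimit() before touching key material; the coredumps_disabled() helper name here is just an example:

#include <stdio.h>
#include <sys/resource.h>

/* Returns non-zero if the soft core-size limit is zero,
   i.e. no core dump will be written for this process. */
static int coredumps_disabled(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) != 0)
        return 0;               /* cannot tell; assume dumps are possible */
    return rl.rlim_cur == 0;
}

int main(void)
{
    if (!coredumps_disabled()) {
        fprintf(stderr, "core dumps enabled; refusing to load keys\n");
        return 1;
    }
    /* ... safe to load encryption keys ... */
    return 0;
}

Keep in mind the soft limit can be raised again later (up to the hard limit), so a check at startup is only a snapshot.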

Related

Core files generated by linux kernel modules

I am trying to load a kernel module (out-of-tree) and dmesg shows a panic. The kernel is still up though. I guess the module panic'd.
Where to find the core file? I want to use gdb and see what the problem is.
Where to find the core file?
Core files are strictly a user-space concept.
I want to use gdb and see what the problem is.
You may be looking for KGDB and/or Kdump/Kexec.
Normally, whenever a core dump is generated, the shell reports "core dumped". That is an easy high-level way to confirm that a core dump occurred, but the message alone does not guarantee that a core file is available. The location where the core dump is written is specified through core_pattern, which is passed to the kernel via sysctl, so check the contents of core_pattern on your system. Also note that on Ubuntu the core file size limit appears to be zero by default, which suppresses core dump generation; check the core file size with ulimit -c and, if it is zero, raise it with ulimit -c unlimited. The manpage http://man7.org/linux/man-pages/man5/core.5.html explains the various reasons why a core dump may not be generated.
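For completeness, inspecting core_pattern from a program is just a file read; a minimal sketch:

#include <stdio.h>

/* Print where (or to which pipe handler) the kernel sends core dumps. */
int main(void)
{
    char pattern[256];
    FILE *f = fopen("/proc/sys/kernel/core_pattern", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fgets(pattern, sizeof pattern, f) != NULL)
        printf("core_pattern: %s", pattern);
    fclose(f);
    return 0;
}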
However, from your explanation, it appears that you are facing a kernel oops, since the kernel is still up (in an unstable state) even though a particular module panicked. In such cases, the kernel prints an oops message. Refer to https://www.kernel.org/doc/Documentation/oops-tracing.txt, which has information regarding kernel oops messages.
Abstract from the link: Normally the Oops text is read from the kernel buffers by klogd and handed to syslogd which writes it to a syslog file, typically /var/log/messages (depends on /etc/syslog.conf). Sometimes klogd dies, in which case you can run dmesg > file to read the data from the kernel buffers and save it. Or you can cat /proc/kmsg > file; however, you have to break in to stop the transfer, since kmsg is a "never ending file".
printk is used for generating the oops messages. printk tags each message with a severity by means of different loglevels/priorities, which allows messages to be classified accordingly (the priorities are defined in linux/kernel.h or linux/kern_levels.h as macros such as KERN_EMERG, KERN_ALERT, KERN_CRIT, etc.). So you may need to check the default logging levels on the system with cat /proc/sys/kernel/printk and change them as required. Also check that the logging daemons are up, and in case you want to debug the kernel, ensure that it is compiled with CONFIG_DEBUG_INFO.
The method of using GDB to find the location where the kernel panicked or oopsed on Ubuntu is described at https://wiki.ubuntu.com/Kernel/KernelDebuggingTricks, which is one method you can use for debugging a kernel oops.
There won't be a core file. You should follow the stack trace in the kernel messages; type dmesg to see it.

Core dump is created, but not written to a file?

I'm trying to get a core dump of a proprietary application running on an embedded linux system, for which I wrote some plugins.
What I did was:
ulimit -c unlimited
echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
kill -3 <PID>
However, no core dump is created. '/tmp/cores' exists and is writable for everyone, and the disk has enough space available. When I try the same thing with sleep 100 & as an example process and then kill it, the core dump is created.
I tried the example for the pipe syntax from the core manpage, which writes some parameters and the size of the core dump into a file called core.info. This file IS created, and the size is greater than 0. So if the core dump is created, why isn't it written to /tmp/cores? To be sure, I also searched for core* on the file system - it's not there. dmesg doesn't show any errors (but it does if I pipe the core dump to an invalid program).
Some more info: The system is probably based on Debian, but I'm not quite sure. GDB is not available, as well as many other tools - there is only busybox for basic stuff.
The process I'm trying to debug is automatically restarted soon after being killed.
So, I guess one solution would be to modify the example program in order to write the dump to a file instead of just counting bytes. But why doesn't it work just normally if there obviously is some data?
If your proprietary application calls setrlimit(2) with RLIMIT_CORE set to 0, or if it is setuid, no core dump happens. See core(5). Perhaps use strace(1) to find out. And you could install gdb (perhaps by [cross-]compiling it). See also gcore(1).
Also, check (and maybe set) the limit in the invoking shell. With bash(1), use the ulimit builtin. Otherwise, cat /proc/self/limits should display the limits. If you don't have bash, you could code a small wrapper in C calling setrlimit then execve, as sketched below.
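Such a wrapper might look like the following sketch; it raises the soft core limit to the hard limit and then execs the real program (the corewrap name is just an example):

#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

/* usage: corewrap program [args...] */
int main(int argc, char **argv)
{
    struct rlimit rl;

    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }
    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;      /* like ulimit -Sc unlimited */
        setrlimit(RLIMIT_CORE, &rl);
    }
    execvp(argv[1], argv + 1);          /* exec'd program inherits the limit */
    perror("execvp");                   /* reached only if exec fails */
    return 1;
}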

How do I ensure my Linux program doesn't produce core dumps?

I've got a program that keeps security-sensitive information (such as private keys) in memory, since it uses them over the lifetime of the program. Production versions of this program set RLIMIT_CORE to 0 to try to ensure that a core dump that might contain this sensitive information is never produced.
However, while this isn't mentioned in the core(5) manpage, the apport documentation on the Ubuntu wiki claims,
Note that even if ulimit is set to disable core files (by specifying a core file size of zero using ulimit -c 0), apport will still capture the crash.
Is there a way within my process (i.e., without relying on configuration of the system external to it) that I can ensure that a core dump of my process is never generated?
Note: I'm aware that there are plenty of methods (such as those mentioned in the comments below) where a user with root or process owner privileges could still access the sensitive data. What I'm aiming at here is preventing unintentional exposure of the sensitive data through it being saved to disk, being sent to the Ubuntu bug tracking system, or things like that. (Thanks to Basile Starynkevitch for making this explicit.)
According to the POSIX spec, core dumps only happen in response to signals whose action is the default action and whose default action is to "terminate the process abnormally with additional actions".
So, if you scroll down to the list in the description of signal.h, everything with an "A" in the "Default Action" column is a signal you need to worry about. Use sigaction to catch all of them and just call exit (or _exit) in the signal handler.
I believe these are the only ways POSIX lets you generate a core dump. Conceivably, Linux might have other "back doors" for this purpose; unfortunately, I am not enough of a kernel expert to be sure...
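A sketch of that approach, assuming the POSIX list of core-producing signals (SIGABRT, SIGBUS, SIGFPE, SIGILL, SIGQUIT, SIGSEGV, SIGSYS, SIGTRAP, SIGXCPU, SIGXFSZ):

#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Exit without running the default (core-dumping) action. */
static void no_core_handler(int sig)
{
    (void)sig;
    _exit(1);                   /* async-signal-safe */
}

static void disable_core_signals(void)
{
    /* Signals whose POSIX default action is to terminate the process
       abnormally with additional actions (a core dump on Linux). */
    static const int core_sigs[] = {
        SIGABRT, SIGBUS, SIGFPE, SIGILL, SIGQUIT,
        SIGSEGV, SIGSYS, SIGTRAP, SIGXCPU, SIGXFSZ
    };
    struct sigaction sa;
    unsigned i;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = no_core_handler;
    sigemptyset(&sa.sa_mask);
    for (i = 0; i < sizeof core_sigs / sizeof core_sigs[0]; i++)
        sigaction(core_sigs[i], &sa, NULL);
}

Combined with RLIMIT_CORE set to 0, this covers the signal paths POSIX defines; it does not help if the process later resets one of these handlers to SIG_DFL.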

Can I set the Linux coredump file format for the current process only?

Some help files suggest using sysctl, but that changes the format globally and requires admin permission. Is there any API or other way to change the dump format for the current process only, in code?
No. The core format is not specified in the process table, and hence it is not possible to change it per-process. Use the pipe core handler to filter the cores and handle them as desired. See the core(5) man page for details.
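As an illustration of the pipe approach, a hypothetical filter registered as |/usr/local/bin/corefilter %e %p in core_pattern receives the core image on stdin and can name or transform it per executable (the /var/cores path here is just an example):

#include <stdio.h>

/* Invoked by the kernel; argv[1] = %e (executable name), argv[2] = %p (pid). */
int main(int argc, char **argv)
{
    char path[256];
    FILE *out;
    int c;

    if (argc < 3)
        return 1;
    snprintf(path, sizeof path, "/var/cores/core.%s.%s", argv[1], argv[2]);
    out = fopen(path, "w");
    if (!out)
        return 1;
    while ((c = getchar()) != EOF)  /* copy the dump from stdin */
        fputc(c, out);
    fclose(out);
    return 0;
}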

What is a good way to dump a Linux core file from inside a process?

We have a server (written in C and C++) that currently catches a SEGV and dumps some internal info to a file. I would like to generate a core file and write it to disk at the time we catch the SEGV, so our support reps and customers don't have to fuss with ulimit and then wait for the crash to happen again in order to get a core file. We have used the abort function in the past, but it is subject to the ulimit rules and doesn't help.
We have some legacy code that reads /proc/pid/map and manually generates a core file, but it is out of date, and doesn't seem very portable (for example, I'm guessing it would not work in our 64 bit builds). What is the best way to generate and dump a core file in a Linux process?
Google has a library, called google-coredumper, for generating coredumps from inside a running process. This should ignore ulimit and other mechanisms.
The documentation for the call that generates the core file is here. According to the documentation, it seems that it is feasible to generate a core file in a signal handler, though it is not guaranteed to always work.
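Based on the library's documented entry point, usage is reportedly as simple as the following sketch (the path is an example; treat the details as an assumption and verify against the google-coredumper docs):

#include <google/coredumper.h>

/* Write a core image of the running process without terminating it.
   WriteCoreDump() is documented to return 0 on success, -1 on error. */
void dump_now(void)
{
    WriteCoreDump("/tmp/core.myserver");
}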
I saw pmbrett's post and thought "hey, that's cool" but couldn't find that utility anywhere on my system (Gentoo).
So I did a bit of prodding, and discovered GDB has this option in it.
gdb --pid=4049 --batch -ex gcore
Seemed to work fine for me.
It's not very useful, however, because it traps the very lowest function that was in use at the time, but it still does a good job outside that (with no memory limitations, I dumped a 350M snapshot of a Firefox process with it).
Try using the Linux command gcore
usage: gcore [-o filename] pid
You'll need to use system (or exec) and getpid() to build up the right command line to call it from within your process
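For example, a sketch of shelling out to gcore with the current pid (the /tmp output prefix is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Shell out to gcore (shipped with gdb) for a snapshot of ourselves. */
void dump_core_via_gcore(void)
{
    char cmd[128];
    snprintf(cmd, sizeof cmd, "gcore -o /tmp/myserver %d", (int)getpid());
    system(cmd);    /* note: system() is not async-signal-safe, so avoid
                       calling this directly from a SIGSEGV handler */
}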
Some possible solutions^W ways of dealing with this situation:
Fix the ulimit!!!
Accept that you don't get a core file and run inside gdb, scripted to do a "thread all apply bt" on SIGSEGV
Accept that you don't get a core file and acquire a stack trace from within the application. The Stack Backtracing Inside Your Program article is pretty old, but the approach should still be possible these days.
You can also change the limit from within your program with setrlimit(2). Like the ulimit shell command, this can lower limits, or raise them as far as the hard limit allows. At startup, call setrlimit() to allow core dumping, and you're fine.
I assume you have a signal handler that catches SEGV, for example, and does something like print a message and call _exit(). (Otherwise, you'd have a core file in the first place!) You could do something like the following.
#include <signal.h>
#include <sys/resource.h>
#include <unistd.h>

void my_handler(int sig)
{
    /* ... existing logging/cleanup ... */
    if (wantCore_ && !fork()) {
        /* child: raise the soft core limit, i.e. ulimit -Sc unlimited */
        struct rlimit rl;
        getrlimit(RLIMIT_CORE, &rl);
        rl.rlim_cur = rl.rlim_max;
        setrlimit(RLIMIT_CORE, &rl);
        signal(sig, SIG_DFL);   /* reset default handler */
        raise(sig);             /* doesn't return, generates a core file */
    }
    _exit(1);
}
Use the backtrace and backtrace_symbols glibc calls to get the trace; just keep in mind that backtrace_symbols uses malloc internally, so it might fail in case of heap corruption.
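A sketch of that, preferring backtrace_symbols_fd(), which writes directly to a file descriptor and avoids malloc entirely:

#include <execinfo.h>
#include <unistd.h>

/* Print the current call stack to stderr without allocating. */
void print_trace(void)
{
    void *frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
}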
system ("kill -6 ")
I'd give it a try if you are still looking for something
