How do you read a given amount of memory from an arbitrary address and write its contents to a file? - Linux

OS: Linux.
I am looking for tools, or tips for writing code (only if strictly necessary), to dump the contents of a given address range in a running process's memory to a file for further investigation.
Thanks for any help.

A core dump is a full snapshot of the process's memory.
If you have gcore available, it will generate a core dump of a running process without terminating it. Otherwise you can use kill -ABRT to kill the process and have the kernel generate a core dump.
Make sure ulimit -c is set to unlimited (or set it with ulimit -c unlimited).
If you really want only a small segment dumped, have a look at the GDB manual's section on copying between memory and a file, which covers the dump binary memory command.
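For example, GDB can write an arbitrary address range of an attached process straight to a file; the PID and addresses below are placeholders:

gdb -p <PID>
(gdb) dump binary memory /tmp/mem.bin 0x08048000 0x08050000
(gdb) detach

If you would rather do it in code, here is a minimal C sketch that reads a range from /proc/<pid>/mem and writes it to a file. It assumes you are permitted to ptrace the target (same user, or CAP_SYS_PTRACE); error handling is kept minimal:

/* dumpmem.c - dump SIZE bytes at ADDR of a running process to a file,
 * by reading /proc/<pid>/mem while the target is stopped under ptrace.
 * Build: cc -o dumpmem dumpmem.c
 * Usage: ./dumpmem <pid> <hex-addr> <size> <outfile>
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc != 5) {
        fprintf(stderr, "usage: %s <pid> <hex-addr> <size> <outfile>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    unsigned long addr = strtoul(argv[2], NULL, 16);
    size_t left = (size_t)strtoul(argv[3], NULL, 0);

    /* Stop the target so we read a consistent snapshot. */
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
        perror("ptrace(PTRACE_ATTACH)");
        return 1;
    }
    waitpid(pid, NULL, 0);

    char path[64];
    snprintf(path, sizeof path, "/proc/%d/mem", (int)pid);
    int mem = open(path, O_RDONLY);
    int out = open(argv[4], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (mem == -1 || out == -1) {
        perror("open");
        return 1;
    }

    char buf[4096];
    while (left > 0) {
        size_t chunk = left < sizeof buf ? left : sizeof buf;
        ssize_t n = pread(mem, buf, chunk, (off_t)addr); /* offset = address */
        if (n <= 0) {                      /* unmapped page or read error */
            perror("pread");
            break;
        }
        if (write(out, buf, (size_t)n) != n) {
            perror("write");
            break;
        }
        addr += (unsigned long)n;
        left -= (size_t)n;
    }

    close(out);
    close(mem);
    ptrace(PTRACE_DETACH, pid, NULL, NULL); /* let the target continue */
    return 0;
}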

Related

Core files generated by linux kernel modules

I am trying to load an out-of-tree kernel module, and dmesg shows a panic. The kernel is still up, though; I guess only the module panicked.
Where do I find the core file? I want to use gdb and see what's the problem.
Where do I find the core file?
Core files are strictly a user-space concept.
I want to use gdb and see what's the problem.
You may be looking for KGDB and/or Kdump/Kexec.
Normally, when a core dump is generated, the shell reports "core dumped". That message is an easy, high-level way to confirm that a dump was attempted, but on its own it does not guarantee that a core file is actually available.
The location where the core dump is written is specified by core_pattern, which is passed to the kernel via sysctl, so check what core_pattern contains on your system. Also note that on Ubuntu the core file size limit appears to be 0 by default, which suppresses core dump generation; check the core file size limit with ulimit -c and, if it is 0, change it with ulimit -c unlimited. The man page http://man7.org/linux/man-pages/man5/core.5.html explains the various reasons a core dump may not be generated.
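For example, to check both settings from a shell:

cat /proc/sys/kernel/core_pattern
ulimit -c                # 0 means core dumps are suppressed
ulimit -c unlimited      # raise the limit for this shell and its children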
However, from your description it appears you are facing a kernel oops, since the kernel is still up (in an unstable state) even though a particular module panicked. In such cases the kernel prints an oops message; see https://www.kernel.org/doc/Documentation/oops-tracing.txt for information about kernel oops messages.
Abstract from the link: Normally the Oops text is read from the kernel buffers by klogd and handed to syslogd, which writes it to a syslog file, typically /var/log/messages (depends on /etc/syslog.conf). Sometimes klogd dies, in which case you can run dmesg > file to read the data from the kernel buffers and save it. Or you can cat /proc/kmsg > file; however, you have to break in to stop the transfer, since kmsg is a "never ending file".
printk is used for generating the oops messages. printk tags each message with a severity by means of different log levels (priorities), which allows messages to be classified accordingly; the priorities are defined in linux/kernel.h (or linux/kern_levels.h) as macros such as KERN_EMERG, KERN_ALERT, and KERN_CRIT. So you may need to check the default logging levels on your system with cat /proc/sys/kernel/printk and change them as required. Also check that the logging daemons are up, and if you want to debug the kernel, make sure it was compiled with CONFIG_DEBUG_INFO.
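For example, to inspect the current log levels and make the console show everything:

cat /proc/sys/kernel/printk        # console, default-message, minimum-console, boot-time levels
echo 8 > /proc/sys/kernel/printk   # show everything, including KERN_DEBUG, on the console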
A method for using GDB to find the location where the kernel panicked or oopsed on Ubuntu is described at https://wiki.ubuntu.com/Kernel/KernelDebuggingTricks; it is one approach you can use for debugging a kernel oops.
There won't be a core file.
You should follow the stack trace in the kernel messages; type dmesg to see it.

Core dump is created, but not written to a file?

I'm trying to get a core dump of a proprietary application running on an embedded Linux system, for which I wrote some plugins.
What I did was:
ulimit -c unlimited
echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
kill -3 <PID>
However, no core dump is created. '/tmp/cores' exists and is writable for everyone, and the disk has enough space available. When I try the same thing with sleep 100 & as an example process and then kill it, the core dump is created.
I tried the example for the pipe syntax from the core manpage, which writes some parameters and the size of the core dump into a file called core.info. This file IS created, and the size is greater than 0. So if the core dump is created, why isn't it written to /tmp/cores? To be sure, I also searched for core* on the file system - it's not there. dmesg doesn't show any errors (but it does if I pipe the core dump to an invalid program).
Some more info: the system is probably based on Debian, but I'm not quite sure. GDB is not available, and neither are many other tools; there is only busybox for basic stuff.
The process I'm trying to debug is automatically restarted soon after being killed.
So, I guess one solution would be to modify the example program to write the dump to a file instead of just counting bytes (a sketch of that change follows below). But why doesn't it work normally if there obviously is some data?
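For what it's worth, turning the byte-counting example from core(5) into a handler that saves the dump is a small change. A minimal sketch, with an illustrative handler path and output directory; the handler runs as root with the core dump on its stdin, and %p is passed as argv[1]:

/* dumpcore.c - core_pattern pipe handler that copies the core dump
 * from stdin to /tmp/cores/core.<pid>.
 * Install: echo '|/usr/local/bin/dumpcore %p' > /proc/sys/kernel/core_pattern
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char path[128];
    snprintf(path, sizeof path, "/tmp/cores/core.%s",
             argc > 1 ? argv[1] : "unknown");   /* argv[1] is the PID (%p) */
    int out = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (out == -1)
        return 1;

    /* Copy the dump from stdin to the output file. */
    char buf[65536];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        if (write(out, buf, (size_t)n) != n)
            return 1;

    close(out);
    return 0;
}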
If your proprietary application calls setrlimit(2) with RLIMIT_CORE set to 0, or if it is setuid, no core dump happens; see core(5). Perhaps use strace(1) to find out. And you could install gdb (perhaps by [cross-] compiling it). See also gcore(1).
Also, check (and maybe set) the limit in the invoking shell. With bash(1), use the ulimit builtin; otherwise, cat /proc/self/limits should display the limits. If you don't have bash, you could code a small wrapper in C calling setrlimit then execve, as sketched below.
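A sketch of such a wrapper; the program path is whatever your init script normally launches, and this assumes the application does not lower RLIMIT_CORE again itself:

/* corewrap.c - raise the core-size limit, then exec the real program.
 * Usage: corewrap /path/to/app [args...]
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s prog [args...]\n", argv[0]);
        return 1;
    }
    struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
    if (setrlimit(RLIMIT_CORE, &rl) == -1)
        perror("setrlimit");       /* raising the hard limit needs privilege */
    execvp(argv[1], &argv[1]);     /* replace ourselves with the target */
    perror("execvp");
    return 127;
}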

How to limit the size of core dump file when generating it using GDB

I am running an embedded application on an ARM9 board where the total flash size is only 180 MB. I am able to run gdb, but when I do
(gdb) generate-core-file
I get an error
warning: Memory read failed for corefile section, 1048576 bytes at 0x4156c000.
warning: Memory read failed for corefile section, 1048576 bytes at 0x50c00000.
Saved corefile core.5546
The program is running. Quit anyway (and detach it)? (y or n) [answered Y; input not from terminal]
Tamper Detected
**********OUTSIDE ifelse 0*********
length validation is failed
I also set ulimit -c 50000, but the core dump still exceeds this limit; when I check the file size with ls -l, it is over 300 MB. How should I limit the size of the core dump in this case?
GDB does not respect ulimit -c; only the kernel does.
It's not clear whether you are running GDB on the target board or on a development host (using gdbserver on the target). You should probably use the latter, which will let you collect a full core dump on the host, where space is not as tight.
Truncated core dumps are a pain anyway, as they often will not contain exactly the info you need to debug the problem.
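A remote setup might look like this; the port, paths, and PID are placeholders:

# on the target:
gdbserver --attach :2345 5546
# on the development host:
gdb /path/to/app
(gdb) target remote <target-ip>:2345
(gdb) generate-core-file core.5546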
In your shell rc file:
limit coredumpsize 50000 # or whatever limit size you like
That should set the limit for everything, including GDB. (limit is the csh/tcsh builtin; the bash equivalent is ulimit -c 50000.)
Note: if you set it to 0, you can make sure your home directory is not cluttered with core dump files.
When did you use ulimit -c? It must be run before starting the program for which you want a core dump, and in the same session.
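For example (myprog is a placeholder):

ulimit -c 50000    # set the limit first, in this shell
./myprog           # then start the program from the same shell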

How to set core dump naming scheme without su/sudo?

I am developing an MPI program on a Linux machine where I do not have sudo/su access. As my program currently segfaults, I would like to examine the core dumps via gdb. Unfortunately, since the program runs as multiple processes, they all write core dumps with the same name; I would like to append the PID so each process gets its own core dump.
I know there is a way to do it via /proc/sys/kernel/core_pattern; however, I do not have write access to it.
Thanks for any help.
It can be a pain to debug MPI apps on systems that are configured this way when you do not have root access. One option for working around this is to use Valgrind to get stack traces for your segfault(s). This will only be useful provided that your application will fail in a reasonable period of time when slowed down via Valgrind, and that it still segfaults at all in this case.
I usually run MPI apps under Valgrind like this:
% mpiexec -n 5 valgrind -q /path/to/my_app
That will send all of the Valgrind output to standard error. If you want the output separated into different files, you can get a bit fancier:
% mpiexec -n 5 valgrind -q --log-file='vg_out.%q{PMI_RANK}' /path/to/my_app
That's the setup for MPICH2. I think that for Open MPI you'll need to replace PMI_RANK with OMPI_MCA_ns_nds_vpid, but if that doesn't work for you, check with the Open MPI developers on their discussion list. In either case, this will yield N files, where N is the size of MPI_COMM_WORLD, named vg_out.0 through vg_out.$((N-1)), each corresponding to a rank in MPI_COMM_WORLD.

How do I enable core dumps for daemon processes on montavista linux?

I am not sure if Stack Overflow is the correct place for this, but since this is for embedded development, and I need core dumps for development as well, I figured this was the best place to ask.
I am trying to enable global core dumps in such a way that every time a program crashes in a way which produces a core, it gets written to /foo/bar/core. Every time a program crashes, it overwrites the old core file. Currently I have tried the following:
Adding this to limits.conf
#<domain> <type> <item> <value>
* soft core unlimited
root soft core unlimited
# End of file
Adding this to sysctl.conf:
# Core Files
kernel.core_pattern=/mnt/ffs/core
kernel.core_uses_pid=0
This did not work. If I boot the system, run sysctl -p and ulimit -c unlimited, and then restart the processes by hand (without the init script), I get a core file in /foo/bar, but it has the PID appended. Any help would be greatly appreciated.
I set the core pattern to not include any process dependent information, yet the kernel still wanted to append the PID, so I ended up removing that bit of code from the kernel, and everything works fine now.
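For reference, a quick way to verify the settings the kernel will actually use, run from the shell that launches the daemons:

sysctl kernel.core_pattern kernel.core_uses_pid
ulimit -c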
