How to limit the size of a core dump file when generating it using GDB - Linux

I am running an embedded application on an ARM9 board, where the total flash size is only 180 MB. I am able to run gdb, but when I do
(gdb) generate-core-file
I get an error
warning: Memory read failed for corefile section, 1048576 bytes at 0x4156c000.
warning: Memory read failed for corefile section, 1048576 bytes at 0x50c00000.
Saved corefile core.5546
The program is running. Quit anyway (and detach it)? (y or n) [answered Y; input not from terminal]
Tamper Detected
**********OUTSIDE ifelse 0*********
length validation is failed
I also set ulimit -c 50000 but the core dump still exceeds this limit. When I do ls -l to check the file size, it is over 300 MB. In this case, how should I limit the size of the core dump?

GDB does not respect 'ulimit -c', only the kernel does.
It's not clear whether you run GDB on the target board or on a development host (using gdbserver on the target). You should probably use the latter, which will allow you to collect a full core dump.
Truncated core dumps are a pain anyway, as often they will not contain exactly the info you need to debug the problem.
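A rough sketch of the gdbserver route (the port, paths and cross-gdb name are placeholders for whatever your toolchain provides):
# on the target board, attach gdbserver to the running process (PID 5546 in the question)
gdbserver --attach :2345 5546
# on the development host, with a cross gdb for ARM
arm-none-linux-gnueabi-gdb /path/to/your/app
(gdb) target remote <board-ip>:2345
(gdb) generate-core-file /tmp/core.myapp
generate-core-file (alias gcore) writes the file on the machine where GDB itself runs, so the dump lands on the host's disk instead of the board's 180 MB flash.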

In your shell rc-file:
limit coredumpsize 50000 # or whatever limit size you like
That should set the limit for everything, including GDB.
Note:
If you set it to 0, you can make sure your home directory is not cluttered with core dump files.
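Note that limit coredumpsize is csh/tcsh syntax; in a bash ~/.bashrc the equivalent would be something like:
ulimit -c 50000   # value is in blocks, not bytes; 0 disables kernel-generated core files entirely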

When did you use ulimit -c? It must be run before starting the program for which you want a core dump, and in the same shell session.
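In other words, something along these lines, all in the same shell session (./myprogram is a placeholder):
ulimit -c unlimited    # or a specific limit, set before the program starts
ulimit -c              # verify the new soft limit
./myprogram            # a crash from here on can produce a core file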

Related

SYSTEM ERROR: I/O error 0 in writeto, ret 2048, file 56(/mfgtmp/tmp/srtE5yybD), addr 77010944. (290) - PROGRESS 4GL

I am suddenly getting the error below when my Progress program had been running for more than 80 minutes. I think this is an OS error, and error 0 says it is out of disk space. I checked the disk space and it shows 14 GB available, but I am not sure why I am getting this error.
Is it because a write ran out of disk space (exceeding the 14 GB) and stopped, so that the available 14 GB stayed the same?
SYSTEM ERROR: I/O error 0 in writeto, ret 2048, file 56(/mfgtmp/tmp/srtE5yybD), addr 77010944. (290)
By default temp files are created "unlinked". Because of this the space they were using is automatically reclaimed by the OS if the session crashes so you will often have a situation where your temp file ran out of space, the session crashed, and then when you investigate there is plenty of free space.
You can change the default behavior by using the -t (lower case) startup parameter. This will result in the files not being removed if a session crashes - so the space will not be returned to the OS. You will have to manually delete "stale" files if you enable -t.
On UNIX -t will also make the files visible in the -T (upper case) directory so that you can see their growth in real time.
On Windows the files are always visible and the current length is not consistently reported by system tools.
If your temp files are being written to a different filesystem than your working directory (the -T startup parameter is where temp files go) then you should have a "protrace.pid" file corresponding to the crashed session's process id and the timestamp of the crash. This will then lead you to the 4gl code that was creating the very large srt file.
14GB is far beyond "reasonable" so you really should look at that code and see if there is a better way to do whatever it is doing.
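For example, a character client started with the temp-file parameters described above might look like this (pro and start.p are placeholders for your actual startup script and procedure):
pro -p start.p -T /mfgtmp/tmp -t
ls -l /mfgtmp/tmp/srt*    # with -t the srt file stays visible, so you can watch it grow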
There are a number of k-base articles on that issue, for instance: https://knowledgebase.progress.com/articles/Knowledge/000027351
When you check disk space, please make sure you're checking the correct file system (/mfgtmp in this case).
The error message references an srt file, so you might want to reduce how heavily the srt file is used; see this article for some initial help: https://knowledgebase.progress.com/articles/Knowledge/P95930
Or: https://knowledgebase.progress.com/articles/Knowledge/P84475
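For the disk space check mentioned above, make sure df is pointed at the filesystem that actually holds the temp files, for example:
df -h /mfgtmp/tmp    # the filesystem the -T directory lives on
df -h .              # the working directory, which may be a different filesystem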

Core dump is created, but not written to a file?

I'm trying to get a core dump of a proprietary application running on an embedded linux system, for which I wrote some plugins.
What I did was:
ulimit -c unlimited
echo "/tmp/cores/core.%e.%p.%h.%t" > /proc/sys/kernel/core_pattern
kill -3 <PID>
However, no core dump is created. '/tmp/cores' exists and is writable for everyone, and the disk has enough space available. When I try the same thing with sleep 100 & as an example process and then kill it, the core dump is created.
I tried the example for the pipe syntax from the core manpage, which writes some parameters and the size of the core dump into a file called core.info. This file IS created, and the size is greater than 0. So if the core dump is created, why isn't it written to /tmp/cores? To be sure, I also searched for core* on the file system - it's not there. dmesg doesn't show any errors (but it does if I pipe the core dump to an invalid program).
Some more info: the system is probably based on Debian, but I'm not quite sure. GDB is not available, nor are many other tools; there is only busybox for basic stuff.
The process I'm trying to debug is automatically restarted soon after being killed.
So, I guess one solution would be to modify the example program in order to write the dump to a file instead of just counting bytes. But why doesn't it work just normally if there obviously is some data?
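For reference, a pipe handler that writes the dump to a file instead of only counting bytes can be as small as this sketch (the script path is a placeholder; the kernel runs the handler as root and feeds the core image to its stdin):
#!/bin/sh
# /usr/local/bin/dumpcore: $1 is the executable name (%e), $2 is the PID (%p)
exec cat > "/tmp/cores/core.$1.$2"
It would be registered with:
echo '|/usr/local/bin/dumpcore %e %p' > /proc/sys/kernel/core_pattern
Whether a dump is produced at all, though, depends on the process itself, which is what the answer below addresses.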
If your proprietary application calls setrlimit(2) with RLIMIT_CORE set to 0, or if it is setuid, no core dump happens. See core(5). Perhaps use strace(1) to find out. And you could install gdb (perhaps by [cross-]compiling it). See also gcore(1).
Also, check (and maybe set) the limit in the invoking shell. With bash(1) use ulimit builtin. Otherwise, cat /proc/self/limits should display the limits. If you don't have bash you could code a small wrapper in C calling setrlimit then execve ...
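A quick way to check both conditions on the already-running process from the busybox shell (the binary name and path are placeholders):
PID=$(pidof proprietary_app)
grep -i core /proc/$PID/limits    # the RLIMIT_CORE values the process actually has
ls -l /path/to/proprietary_app    # an 's' in the permission bits means setuid/setgid
cat /proc/sys/fs/suid_dumpable    # 0 means processes that changed credentials never dump core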

How do you read a certain size of memory from an arbitrary address and write its contents to a file?

OS: Linux.
I am looking for tools, or tips for writing code (if and only if necessary), to write the contents of an address range to a file for further investigation.
Thanks for any help.
A core dump is a full snapshot of the process memory.
If you have gcore available, it will generate a core dump of a running process without terminating it. Otherwise you can use kill -ABRT to kill the process and generate a core dump.
Make sure ulimit -c is set to unlimited (or set it with ulimit -c unlimited).
If you really want only a small segment dumped have a look at this section of GDB manual.
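The GDB feature referred to there is the dump command; a minimal sketch, with the PID and the address range as placeholders:
gdb --batch -p 1234 -ex 'dump binary memory /tmp/segment.bin 0x4156c000 0x4158c000'
This attaches to PID 1234, writes the raw bytes between the two addresses to /tmp/segment.bin, and detaches when the batch session ends.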

Generating core dumps

From time to time my Go program crashes.
I tried a few things in order to get core dumps generated for this program:
defining ulimit on the system, I tried both ulimit -c unlimited and ulimit -c 10000 just in case. After launching my panicking program, I get no core dump.
I also added recover() support in my program and added code to log to syslog in case of panic but I get nothing in syslog.
I am running out of ideas right now.
I must have overlooked something but I do not find what, any help would be appreciated.
Thanks ! :)
Note that a core dump is generated by the OS when a condition from a certain set is met. These conditions are pretty low-level — like trying to access unmapped memory or trying to execute an opcode the CPU does not know etc. Under a POSIX operating system such as Linux when a process does one of these things, an appropriate signal is sent to it, and some of them, if not handled by the process, have a default action of generating a core dump, which is done by the OS if not prohibited by setting a certain limit.
Now observe that this machinery treats a process at the lowest possible level (machine code), but the binaries a Go compiler produces are higher-level than those a C compiler (or assembler) produces, and this means certain errors in a process produced by a Go compiler are handled by the Go runtime rather than the OS. For instance, a typical NULL pointer dereference in a process produced by a C compiler usually results in the process being sent the SIGSEGV signal, which then typically results in an attempt to dump the process' core and terminate it. In contrast, when this happens in a process compiled by a Go compiler, the Go runtime kicks in and panics, producing a nice stack trace for debugging purposes.
With these facts in mind, I would try to do this:
Wrap your program in a shell script which first relaxes the limit for core dumps (but see below) and then runs your program with its standard error stream redirected to a file (or piped to the logger binary etc); a sketch of such a wrapper follows this list.
The limits a user can tweak have a hierarchy: there are soft and hard limits — see this and this for an explanation. So try checking your system does not have 0 for the core dump size set as a hard limit as this would explain why your attempt to raise this limit has no effect.
At least on my Debian systems, when a program dies due to SIGSEGV, this fact is logged by the kernel and is visible in the syslog log files, so try grepping them for hints.
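A minimal sketch of such a wrapper (paths are placeholders); setting GOTRACEBACK=crash additionally tells the Go runtime to end an unrecovered panic with SIGABRT instead of a plain exit, which is what gives the kernel something to dump:
#!/bin/sh
ulimit -c unlimited                  # raise the soft core-size limit (only works if the hard limit allows it)
export GOTRACEBACK=crash             # abort on an unrecovered panic so a core can be written
exec /path/to/myprog 2>>/var/log/myprog.err   # keep the runtime's stack trace even if no core appears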
First, please make sure all errors are handled.
For core dumps, you can refer to generate a core dump in linux.
You can use supervisor to restart the program when it crashes.

How to set core dump naming scheme without su/sudo?

I am developing an MPI program on a Linux machine where I do not have sudo/su access. As my program currently segfaults, I would like to examine the core dumps via gdb. Unfortunately, since the program runs as multiple MPI processes, they all write core dumps with the same default name, so I would like to be able to append the PID to the core dump of each process.
I know there is a way to do it via /proc/sys/kernel/core_pattern, however I do not have access to write to this.
Thanks for any help.
It can be a pain to debug MPI apps on systems that are configured this way when you do not have root access. One option for working around this is to use Valgrind to get stack traces for your segfault(s). This will only be useful provided that your application will fail in a reasonable period of time when slowed down via Valgrind, and that it still segfaults at all in this case.
I usually run MPI apps under Valgrind like this:
% mpiexec -n 5 valgrind -q /path/to/my_app
That will send all of the Valgrind output to standard error. But if you want the output separated into different files, you can get a bit fancier:
% mpiexec -n 5 valgrind -q --log-file='vg_out.%q{PMI_RANK}' /path/to/my_app
That's the setup for MPICH2. I think that for Open MPI you'll need to replace PMI_RANK with OMPI_MCA_ns_nds_vpid, but if that doesn't work for you then you'll need to check with the Open MPI developers on their discussion list. In either case, this will yield N files, where N is the size of MPI_COMM_WORLD, each named vg_out.0, vg_out.1, ..., to vg_out.$(($N-1)), each corresponding to a rank in MPI_COMM_WORLD.
