no core dump in /var/crash - linux

I am trying to understand a bit how core dumps work.
I use the following test.c file to generate a core dump:
#include <stdio.h>

void foo()
{
    int *ptr = 0;
    *ptr = 7;
}

int main()
{
    foo();
    return 0;
}
I compile with
gcc test.c -o test
which gives me the following message when I run ./test
Segmentation fault (core dumped)
My file
/proc/sys/kernel/core_pattern
contains :
|/usr/share/apport/apport %p %s %c %d %P
I checked that I have permission to write to the directory
/var/crash/
but after the core dump there is nothing in this folder (/var/crash/).
I am using Ubuntu 17.04.
Do you know what can go wrong here?
Edit:
I forgot to mention that I set the limits with :
ulimit -c unlimited
so the output of
ulimit -c
reads :
unlimited
I even tried to do what they say here in the section "How to enable apport", so I added a hash sign in front of
'problem_types': ['Bug', 'Package']
But with all of this, the core dump still cannot be found in /var/crash.

This link contains a checklist for why a core dump might not be generated. I am adding the list below in case the link becomes inaccessible in the future.
The core would have been larger than the current limit.
You don't have the necessary permissions to dump core (directory and file). Notice that core dumps are placed in the dumping process's current directory, which could be different from that of the parent process.
Verify that the file system is writable and has sufficient free space.
If a subdirectory named core exists in the working directory, no core will be dumped.
If a file named core already exists but has multiple hard links, the kernel will not dump core.
Verify the permissions on the executable: if the executable has the suid or sgid bit enabled, core dumps will be disabled by default. The same applies if you have execute permission but no read permission on the file.
Verify that the process has not changed working directory, core size limit, or dumpable flag (the limit and the dumpable flag can be checked from within the process itself; see the sketch after this list).
Some kernel versions cannot dump processes with shared address space (AKA threads). Newer kernel versions can dump such processes but will append the pid to the file name.
The executable could be in a non-standard format not supporting core dumps. Each executable format must implement a core dump routine.
The segmentation fault could actually be a kernel Oops, check the system logs for any Oops messages.
The application called exit() instead of using the core dump handler.
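
As a sanity check for the limit and dumpable-flag items above, here is a minimal C sketch of my own (not from the linked checklist) that raises RLIMIT_CORE for the current process, prints the dumpable flag, and then crashes deliberately so a dump is attempted:

#include <stdio.h>
#include <sys/prctl.h>
#include <sys/resource.h>

int main(void)
{
    /* Raise the core size limit for this process. Raising the hard
     * limit may fail without CAP_SYS_RESOURCE; the soft limit can
     * only be raised up to the current hard limit. */
    struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
    if (setrlimit(RLIMIT_CORE, &rl) != 0)
        perror("setrlimit(RLIMIT_CORE)");

    /* 1 means the kernel considers this process dumpable. */
    printf("dumpable flag: %d\n", prctl(PR_GET_DUMPABLE, 0, 0, 0, 0));

    /* Crash deliberately so a core dump is attempted. */
    int *ptr = 0;
    *ptr = 7;
    return 0;
}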

I was also struggling to get core dumps, and I had the same problem with ulimit. The session-specific setting suggested by Niranjan also didn't work for me.
Finally I found the solution at https://serverfault.com/questions/216656/how-to-set-systemwide-ulimit-on-ubuntu
in /etc/security/limits.conf add:
root - core unlimited
* - core unlimited
Then log out and log back in. After that,
ulimit -c
in the terminal should return "unlimited", and core dumps will be generated.

What file size limit have you set for core dumps on your machine?
You can check it using
$ ulimit -c
If it is set to 0, no core dumps will be generated; this is the default setting in most distros.
You can enable core dumps by setting it to 'unlimited' or to a specific file size limit.
$ ulimit -c unlimited
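
If you would rather verify this from inside a program, here is a minimal C sketch of my own (assuming a Linux /proc filesystem) that prints the active core_pattern together with the same limit that ulimit -c reports:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* The pattern the kernel will use when writing a core dump. */
    char pattern[256] = "";
    FILE *f = fopen("/proc/sys/kernel/core_pattern", "r");
    if (f) {
        if (fgets(pattern, sizeof pattern, f))
            printf("core_pattern: %s", pattern);
        fclose(f);
    }

    /* The per-process equivalent of `ulimit -c`. */
    struct rlimit rl;
    if (getrlimit(RLIMIT_CORE, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("core limit: unlimited\n");
        else
            printf("core limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
    }
    return 0;
}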

Related

Perf: kernel module symbols not showing up in profiling

After loading and running a kernel module and then profiling with perf:
$ perf record -a -g --call-graph dwarf sleep 30
$ perf report
my kernel module's symbols are not present in perf's report, although the symbols are present in /proc/kallsyms. The module is also not listed in perf buildid-list.
I tried what this answer suggests, but it didn't help.
What are the possible reasons that could lead to this?
The message Failed to open [thrUserCtrl], continuing without symbols sounds like perf was unable to find your module. Try installing it into
/lib/modules/`uname -r`/extra
directory, as described in https://wiki.centos.org/HowTos/BuildingKernelModules:
6. In this example, the file cifs.ko has just been created.
As root, copy the .ko file to the /lib/modules/<kernel-version>/extra/
directory.
[root@host linux-2.6.18.i686]# cp fs/cifs/cifs.ko /lib/modules/`uname -r`/extra
(don't forget to run depmod -a after changing files in /lib/modules)
This message is generated in map__load: http://elixir.free-electrons.com/linux/v4.11/source/tools/perf/util/map.c#L284
int map__load(struct map *map)
{
    const char *name = map->dso->long_name;
    int nr;
    ...
    nr = dso__load(map->dso, map);
    if (nr < 0) {
        if (map->dso->has_build_id) {
            ...
        } else
            pr_warning("Failed to open %s", name);

        pr_warning(", continuing without symbols\n");
        return -1;
    }
    ...
}
when the dso__load function returns an error.

Cannot locate core file with abrt-hook-cpp installed

I've been led to understand that if abrt-ccpp.service is installed on a Linux PC, it supersedes/overwrites (I've read both, not sure which is true) the file /proc/sys/kernel/core_pattern, which otherwise specifies the location and filename pattern of core files.
Question:
When I execute systemctl, why does abrt-ccpp.service report exited under the SUB column? I don't understand the combination of active and exited: is the service "alive"/active/running or not?
> systemctl
UNIT LOAD ACTIVE SUB
abrt-ccpp.service loaded active exited ...
Question:
Where are core files generated? I wrote this program to generate a SIGSEGV:
#include <iostream>

int main(int argc, char* argv[], char* envz[])
{
    int* pInt = NULL;
    std::cout << *pInt << std::endl;
    return 0;
}
Compilation and execution as follows:
> g++ main.cpp
> ./a.out
Segmentation fault (core dumped)
But I cannot locate where the core file is generated.
What I have tried:
Looked in the same directory as my main.cpp. Core file is not there.
Looked in /var/tmp/abrt/ because of the following comment in /etc/abrt/abrt.conf. Core file is not there.
...
# Specify where you want to store coredumps and all files which are needed for
# reporting. (default:/var/tmp/abrt)
#
# Changing dump location could cause problems with SELinux. See man abrt_selinux(8).
#
#DumpLocation = /var/tmp/abrt
...
Looked in /var/spool/abrt/ because of a comment at this link. Core file is not there.
Edited /etc/abrt/abrt.conf, uncommenting and setting DumpLocation = ~/foo, which is an existing directory. Followed this by restarting the service (sudo service abrt-ccpp restart) and rerunning a.out. The core file was not generated in ~/foo/.
Verified that ulimit -c reports unlimited.
I am out of ideas of what else to try and where else to look.
In case helpful, this is the content of my /proc/sys/kernel/core_pattern:
> cat /proc/sys/kernel/core_pattern
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
Can someone help explain how the abrt-hook-ccpp service works and where it generates core files? Thank you.
I'd like to credit https://unix.stackexchange.com/users/119298/meuh who answered this at https://unix.stackexchange.com/questions/343240/cannot-locate-core-file-with-abrt-hook-cpp-installed.
The answer was to add this line to the file /etc/abrt/abrt-action-save-package-data.conf:
ProcessUnpackaged = yes
By default, abrt only keeps crashes from programs that belong to an installed package, so a locally built a.out is silently discarded unless this option is set. The comment from @daniel-kamil-kozar was also a viable workaround.

How to generate core dump file in Linux?

I am trying to generate a core dump file using the program below on Linux.
#include <stdio.h>
#include <iostream>
using namespace std;

int main()
{
    char *temp = "ABCDE";  // points to a read-only string literal
    int i = 0;
    temp[3] = 'F';         // writing to the literal raises SIGSEGV
    for (i = 0; i < 5; i++)
        printf("%d Value is %c\n", i, temp[i]);
    cout << "Done" << endl;
    return 0;
}
I saved the above source code as sample.cpp and built it using the command below.
g++ sample.cpp -g -o test
Running the output file "test" produced the error "Segmentation fault" (the write to the string literal is what crashes), but it didn't generate a core dump file.
./test
I referred to this. Thanks for your help.
The generation of core dump files is not always enabled. Try the ulimit command.
Some systems are configured not to write core files by default, since the files can be large and rapidly fill up the available disk space on a system. In the GNU Bash shell the command ulimit -c controls the maximum size of core files. If the size limit is zero, no core files are produced. The current size limit can be shown by typing the following command:
$ ulimit -c
0
If the result is zero, as shown above, then it can be increased with the following command to allow core files of any size to be written:
$ ulimit -c unlimited

How can I get perf to find symbols in my program

When using perf report, I don't see any symbols for my program, instead I get output like this:
$ perf record /path/to/racket ints.rkt 10000
$ perf report --stdio
# Overhead Command Shared Object Symbol
# ........ ........ ................. ......
#
70.06% ints.rkt [unknown] [.] 0x5f99b8
26.28% ints.rkt [kernel.kallsyms] [k] 0xffffffff8103d0ca
3.66% ints.rkt perf-32046.map [.] 0x7f1d9be46650
Which is fairly uninformative.
The relevant program is built with debugging symbols, and the sysprof tool shows the appropriate symbols, as does Zoom, which I think is using perf under the hood.
Note that this is on x86-64, so the binary is compiled with -fomit-frame-pointer, but that's the case when running under the other tools as well.
This post is already over a year old, but since it came out at the top of my Google search results when I had the same problem, I thought I'd answer it here. After some more searching around, I found the answer given in this related StackOverflow question very helpful. On my Ubuntu Raring system, I then ended up doing the following:
Compile my C++ sources with -g (fairly obvious, you need debug symbols)
Run perf as
perf record -g dwarf -F 97 /path/to/my/program
This way perf is able to handle the DWARF 2 debug format, which is the standard format gcc uses on Linux. The -F 97 parameter reduces the sampling rate to 97 Hz. The default sampling rate was apparently too high for my system and resulted in messages like this:
Warning:
Processed 172390 events and lost 126 chunks!
Check IO/CPU overload!
and the perf report call afterwards would fail with a segmentation fault. With the reduced sampling rate everything worked out fine.
Once the perf.data file has been generated without any errors in the previous step, you can run perf report etc. I personally like the FlameGraph tools to generate SVG visualizations.
Other people reported that running
echo 0 > /proc/sys/kernel/kptr_restrict
as root can help as well, if kernel symbols are required.
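
As a self-contained way to try this workflow out, here is a small C workload of my own (the file name hot.c is just illustrative); build it with -g and record it as described above. Note that newer perf versions spell the option --call-graph dwarf rather than -g dwarf:

/* hot.c - small workload for testing call-graph recording.
 * Build:  gcc -g -O2 hot.c -o hot
 * Record: perf record --call-graph dwarf -F 97 ./hot
 */
#include <stdio.h>

static double burn(long n)
{
    double acc = 0.0;
    for (long i = 1; i <= n; i++)
        acc += 1.0 / (double)i;  /* harmonic sum keeps the CPU busy */
    return acc;
}

int main(void)
{
    printf("%f\n", burn(200000000L));
    return 0;
}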
In my case the solution was to delete the ELF files that contained cached symbols from previous builds and were messing things up.
They are in the ~/.debug/ folder.
You can always use the nm command.
Here is some sample output:
Ethans-MacBook-Pro:~ phyrrus9$ nm a.out
0000000100000000 T __mh_execute_header
0000000100000f30 T _main
U _printf
0000000100000f00 T _sigint
U _signal
U dyld_stub_binder
I had this problem too: I couldn't see any userspace symbols, but I saw some kernel symbols. I thought this was a symbol loading issue. After trying all the possible solutions I could find, I still couldn't get it to work.
Then I faintly remembered that
ulimit -u unlimited
is needed. I tried it, and it magically worked.
I found from this wiki that this command is needed when you use too many file descriptors.
https://perf.wiki.kernel.org/index.php/Tutorial#Troubleshooting_and_Tips
My final command was
perf record -F 999 -g ./my_program
I didn't need --call-graph.
Make sure that you compile the program with the -g option in gcc (cc), so that debugging information is produced in the operating system's native format.
Try the following and check whether debug symbols are present in the symbol table.
$ objdump -t your-elf
$ readelf -a your-elf
$ nm -a your-elf
What about your dev host machine? Is it also running an x86_64 OS?
If not, please make sure that perf is cross-compiled, because perf depends on objdump and other tools in the toolchain.
I got the same problem with perf after overriding the name of my program via prctl(PR_SET_NAME).
As far as I can see, your case is pretty similar:
70.06% ints.rkt [unknown]
The command you executed (racket) is different from the one perf has seen (ints.rkt).
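A minimal C sketch of my own that reproduces the mismatch (the name "myworker" is made up): after the prctl call, perf's Command column reports "myworker" while the executable on disk keeps its original name:

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
    /* Rename this thread as seen in /proc/<pid>/comm and by perf. */
    if (prctl(PR_SET_NAME, "myworker", 0, 0, 0) != 0)
        perror("prctl(PR_SET_NAME)");

    /* Busy loop so perf has samples to attribute. */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 500000000UL; i++)
        sum += i;
    printf("%lu\n", sum);
    return 0;
}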
You can check the effect of kptr_restrict with cat /proc/kallsyms. If the addresses of the symbols in the result are all 0x000000, you can fix it with the command echo 0 > /proc/sys/kernel/kptr_restrict. After this, you may get the desired result from perf report.

Compressing the core files during core generation

Is there way to compress the core files during core dump generation?
If the storage space is limited in the system, is there a way of conserving it in case of need for core dump generation with immediate compression?
Ideally the method would work on older versions of Linux, such as 2.6.x.
The Linux kernel /proc/sys/kernel/core_pattern file will do what you want: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#191
Set the filename to something like |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz and your core files should be saved compressed for you.
For embedded Linux systems, the following script-based change works perfectly for generating compressed core files, in two steps.
Step 1: create a script:
touch /bin/gen_compress_core.sh
chmod +x /bin/gen_compress_core.sh
cat > /bin/gen_compress_core.sh
#!/bin/sh
exec /bin/gzip -f - >"/var/core/core-$1.$2.gz"
(press Ctrl+D to end the input)
Step 2: update the core pattern file:
cat > /proc/sys/kernel/core_pattern
|/bin/gen_compress_core.sh %e %p
(press Ctrl+D to end the input)
As suggested by the other answer, the Linux kernel's /proc/sys/kernel/core_pattern file is a good place to start: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#141
As the documentation says, you can specify the special character "|", which tells the kernel to pipe the dump to a program. As suggested, you could use |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz as the pattern; however, it didn't work for me. I expect the reason is that the kernel does not run the pattern through a shell, so the > character is not treated as a redirection; it is probably passed as a parameter to gzip.
To avoid this problem, as others suggested, you can create the script in some location; I am using /home/<username>/crashes/core.sh. Create it using the following command, replacing <username> with your user. Alternatively, you can obviously change the entire path.
echo -e '#!/bin/bash\nexec /bin/gzip -f - >"/home/<username>/crashes/core-$1-$2-$3-$4-$5.gz"' > ~/crashes/core.sh
Now this script will take 5 input parameters, concatenate them, and append them to the core path. Full paths must be specified in ~/crashes/core.sh; the location of the script itself can also be changed. Now let's tell the kernel to use our executable, with parameters, when generating the file:
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
Again, <username> should be replaced (or the entire path changed to match the location and name of the core.sh script). The next step is to crash some program; let's create an example crashing cpp file:
int main()
{
    int *a = nullptr;
    int b = *a;
}
After compiling and running there are 2 options, either we will see:
Segmentation fault (core dumped)
Or
Segmentation fault
In case we see the latter, there are a few possible reasons:
ulimit is not set; ulimit -c should show the size limit for cores
apport or your distro's core dump collector is not running; this should be investigated further
there is an error in the script we wrote; I suggest first checking with a basic dump path to rule out the other causes. The setting below should make the kernel write /tmp/core.dump:
sudo sysctl -w kernel.core_pattern="/tmp/core.dump"
I know there is already an answer to this question; however, it wasn't obvious to me why it wasn't working "out of the box", so I wanted to summarize my findings. I hope it helps someone.
