Cannot locate core file with abrt-hook-ccpp installed

I've been led to understand that if abrt-ccpp.service is installed on a Linux PC, it supersedes/overwrites (I've read both, not sure which is true) the file /proc/sys/kernel/core_pattern, which otherwise specifies the location and filename pattern of core files.
Question:
When I execute systemctl, why does abrt-ccpp.service report exited under the SUB column? I don't understand the combination of active and exited: is the service "alive"/active/running or not?
> systemctl
UNIT LOAD ACTIVE SUB
abrt-ccpp.service loaded active exited ...
Question:
Where are core files generated? I wrote this program to generate a SIGSEGV:
#include <iostream>

int main(int argc, char* argv[], char* envz[])
{
    int* pInt = NULL;
    std::cout << *pInt << std::endl;
    return 0;
}
Compilation and execution as follows:
> g++ main.cpp
> ./a.out
Segmentation fault (core dumped)
But I cannot locate where the core file is generated.
What I have tried:
Looked in the same directory as my main.cpp. Core file is not there.
Looked in /var/tmp/abrt/ because of the following comment in /etc/abrt/abrt.conf. Core file is not there.
...
# Specify where you want to store coredumps and all files which are needed for
# reporting. (default:/var/tmp/abrt)
#
# Changing dump location could cause problems with SELinux. See man abrt_selinux(8).
#
#DumpLocation = /var/tmp/abrt
...
Looked in /var/spool/abrt/ because of a comment at this link. Core file is not there.
Edited /etc/abrt/abrt.conf, uncommented DumpLocation, and set DumpLocation = ~/foo, which is an existing directory. Followed this by restarting the abrt-ccpp service (sudo service abrt-ccpp restart) and rerunning a.out. The core file was not generated in ~/foo/.
Verified that ulimit -c reports unlimited.
I am out of ideas of what else to try and where else to look.
In case helpful, this is the content of my /proc/sys/kernel/core_pattern:
> cat /proc/sys/kernel/core_pattern
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e
Can someone help explain how the abrt-hook-ccpp service works and where it generates core files? Thank you.

I'd like to credit https://unix.stackexchange.com/users/119298/meuh who answered this at https://unix.stackexchange.com/questions/343240/cannot-locate-core-file-with-abrt-hook-cpp-installed.
The answer was to add this line to the file /etc/abrt/abrt-action-save-package-data.conf:
ProcessUnpackaged = yes
The comment from @daniel-kamil-kozar was also a viable workaround.
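For reference, a minimal sketch of applying that setting (the sed edit and service name are illustrative; abrtd is assumed to be the systemd-managed ABRT daemon, and the paths are the defaults quoted above):
sudo sed -i 's/^#\?ProcessUnpackaged = no/ProcessUnpackaged = yes/' /etc/abrt/abrt-action-save-package-data.conf
sudo systemctl restart abrtd.service
./a.out                  # trigger the crash again
ls /var/tmp/abrt/        # each crash typically gets a ccpp-* directory containing a "coredump" file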

Related

no core dump in /var/crash

I am trying to understand a bit how core dumps work.
I use this test.c file to generate a core dump:
#include <stdio.h>

void foo()
{
    int *ptr = 0;
    *ptr = 7;   /* write through a null pointer -> SIGSEGV */
}

int main()
{
    foo();
    return 0;
}
I compile with
gcc test.c -o test
Which gives me the following message when I run ./test
Segmentation fault (core dumped)
My file
/proc/sys/kernel/core_pattern
contains :
|/usr/share/apport/apport %p %s %c %d %P
I checked that I have the permissions to write to the directory
/var/crash/
but after the core dump there is nothing in this folder (/var/crash/).
I am using Ubuntu 17.04.
Do you know what can go wrong here?
edit
I forgot to mention that I set the limits with :
ulimit -c unlimited
so the output of
ulimit -c
reads :
unlimited
I even tried what is described here in the section "How to enable apport", so I added a hash sign in front of
'problem_types': ['Bug', 'Package']
But with all of this, the core dump still cannot be found in /var/crash.
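A few apport-specific checks can also help narrow this down (the service name and log path below are assumptions based on a typical Ubuntu install):
systemctl status apport.service    # is the apport service running?
tail /var/log/apport.log           # apport usually logs here why it kept or dropped a crash
ls -l /var/crash/                  # reports land here as *.crash files rather than raw core files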
This link contains a checklist of reasons why a core dump is not generated. I am adding the list below in case the link becomes inaccessible in the future; a quick way to check several of these items from a shell is sketched after the list.
The core would have been larger than the current limit.
You don't have the necessary permissions to dump core (directory and file). Note that core dumps are placed in the dumping process's current directory, which could be different from that of the parent process.
Verify that the file system is writable and has sufficient free space.
If a subdirectory named core exists in the working directory, no core will be dumped.
If a file named core already exists but has multiple hard links, the kernel will not dump core.
Verify the permissions on the executable: if the executable has the suid or sgid bit enabled, core dumps will be disabled by default. The same is true if you have execute permission but no read permission on the file.
Verify that the process has not changed its working directory, core size limit, or dumpable flag.
Some kernel versions cannot dump processes with a shared address space (i.e. threads). Newer kernel versions can dump such processes, but will append the pid to the file name.
The executable could be in a non-standard format that does not support core dumps. Each executable format must implement a core dump routine.
The segmentation fault could actually be a kernel Oops; check the system logs for any Oops messages.
The application called exit() instead of using the core dump handler.
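As a rough companion to the list above, several of these conditions can be checked quickly from a shell (paths are the usual defaults; adjust as needed):
ulimit -c                              # 0 means core files are disabled for this shell
cat /proc/sys/kernel/core_pattern      # plain path/pattern, or "|..." for a pipe handler
ls -ld .                               # is the working directory writable?
ls -l core 2>/dev/null                 # an existing "core" file or directory can block the dump
cat /proc/self/coredump_filter         # which memory segments get dumped (informational)
dmesg | tail                           # look for kernel Oops messages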
I was also struggling to get core dumps, and I had the same problem with ulimit. The session-specific setting suggested by Niranjan also didn't work for me.
Finally, I found the solution at https://serverfault.com/questions/216656/how-to-set-systemwide-ulimit-on-ubuntu
In /etc/security/limits.conf, add:
root - core unlimited
* - core unlimited
Then log out and log back in. After that,
ulimit -c
on the terminal should return "unlimited", and core dumps will be generated.
What file size limit have you set for core dumps on your machine?
You can check it using
$ ulimit -c
If it is set to 0, no core dumps will be generated; this is the default setting in most distros.
You can enable core dumps by setting it to 'unlimited' or to a specific file size limit.
$ ulimit -c unlimited

Perf: kernel module symbols not showing up in profiling

After loading and running a kernel module and then profiling with perf:
$ perf record -a -g --call-graph dwarf sleep 30
$ perf report
my kernel module's symbols are not present in the perf's report.
Although the symbols are present in /proc/kallsyms.
The module is also not present in perf buildid-list.
I tried what this answer says about making the module a kernel module, but it didn't help.
What are the possible reasons that could lead to this?
The message Failed to open [thrUserCtrl], continuing without symbols sounds like perf was unable to find your module. Try installing it into
/lib/modules/`uname -r`/extra
directory, as described in https://wiki.centos.org/HowTos/BuildingKernelModules:
6. In this example, the file cifs.ko has just been created.
As root, copy the .ko file to the /lib/modules/<kernel-version>/extra/
directory.
[root@host linux-2.6.18.i686]# cp fs/cifs/cifs.ko /lib/modules/`uname -r`/extra
(don't forget depmod -a command after changing files in /lib/modules)
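A hedged end-to-end sequence, assuming the module from the question is named thrUserCtrl.ko and sits in the current directory (adjust the name and path to your module):
sudo mkdir -p /lib/modules/$(uname -r)/extra
sudo cp thrUserCtrl.ko /lib/modules/$(uname -r)/extra/
sudo depmod -a
sudo rmmod thrUserCtrl && sudo modprobe thrUserCtrl   # reload the module from its new location
perf record -a -g --call-graph dwarf sleep 30
perf report                                           # the module's symbols should now resolve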
This message is generated in map__load: http://elixir.free-electrons.com/linux/v4.11/source/tools/perf/util/map.c#L284
int map__load(struct map *map)
{
    const char *name = map->dso->long_name;
    int nr;
    ...
    nr = dso__load(map->dso, map);
    if (nr < 0) {
        if (map->dso->has_build_id) {
            ...
        } else
            pr_warning("Failed to open %s", name);

        pr_warning(", continuing without symbols\n");
        return -1;
when the dso__load function returns an error.
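If the warning persists, a few quick sanity checks may help (the module name is an assumption carried over from the question):
grep thrUserCtrl /proc/modules            # is the module actually loaded?
grep thrUserCtrl /proc/kallsyms | head    # are its symbols visible to the kernel?
perf buildid-list | grep -i thrUserCtrl   # does the recorded perf data reference the module now?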

How to get cwd for relative paths?

How can I get the current working directory in strace output for system calls that are made with relative paths? I'm trying to debug a complex application that spawns multiple processes and fails to open a particular file.
stat("some_file", 0x7fff6b313df0) = -1 ENOENT (No such file or directory)
Since some_file exists, I believe the process is looking in the wrong directory. I tried tracing chdir calls too, but since the output is interleaved, it's hard to deduce the working directory that way. Is there a better way?
You can use the -y option and it will print the full path. Another useful flag in this situation is -P which only traces syscalls relating to a specific path, e.g.
strace -y -P "some_file"
Unfortunately -y will only print the path of file descriptors, and since your failing call doesn't return one, it doesn't help here. A possible workaround is to interrupt the process when that syscall is run under a debugger; then you can get its working directory by inspecting /proc/<PID>/cwd. Something like this (totally untested!):
gdb --args strace -P "some_file" -e inject=open:signal=SIGSEGV
Or you may be able to use a conditional breakpoint. Something like this should work, but I had difficulty getting GDB to follow child processes after a fork. If you only have one process, I think it should be fine.
gdb your_program
break open if $_streq((char*)$rdi, "some_file")
run
print getpid()
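Once you have the PID (from gdb above, or from strace -f output), the working directory can be read straight from procfs; the PID below is only a placeholder:
readlink /proc/12345/cwd    # symlink to the process's current working directory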
It is quite easy: use the function char *realpath(const char *path, char *resolved_path) to resolve the current directory.
This is my example:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *abs;

    /* realpath(".", NULL) returns a malloc'd buffer with the absolute path of the cwd */
    abs = realpath(".", NULL);
    printf("%s\n", abs);
    free(abs);
    return 0;
}
Output:
root@ubuntu1504:~/patches_power_spec# pwd
/root/patches_power_spec
root@ubuntu1504:~/patches_power_spec# ./a.out
/root/patches_power_spec

How to generate core dump file in Linux?

I am trying to generate a core dump file using the program below on Linux.
#include <stdio.h>
#include <iostream>
using namespace std;

int main()
{
    char *temp = "ABCDE";   // string literal resides in read-only memory
    int i = 0;
    temp[3] = 'F';          // writing to it raises SIGSEGV
    for (i = 0; i < 5; i++)
        printf("%d: Value is %c\n", i, temp[i]);
    cout << "Done" << endl;
    return 0;
}
I saved the above source code as sample.cpp and built it with the command below.
g++ sample.cpp -g -o test
Running the output file "test" produced the error "Segmentation fault", but it didn't generate a core dump file.
./test
I referred to this. Thanks for your help.
The generation of core dump files is not always enabled. Check with the ulimit command.
Some systems are configured not to write core files by default, since the files can be large and rapidly fill up the available disk space on a system. In the GNU Bash shell the command ulimit -c controls the maximum size of core files. If the size limit is zero, no core files are produced. The current size limit can be shown by typing the following command:
$ ulimit -c
0
If the result is zero, as shown above, it can be increased with the following command to allow core files of any size to be written:
$ ulimit -c unlimited
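Putting it together, a minimal session might look like this (assuming the default core_pattern writes a file named core into the current directory; if the pattern pipes to apport or abrt, the core lands elsewhere):
$ ulimit -c unlimited
$ ./test
$ ls core*            # e.g. "core" or "core.<pid>"
$ gdb ./test core     # then use "bt" to see where it crashed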

Compressing the core files during core generation

Is there a way to compress core files during core dump generation?
If storage space is limited on the system, is there a way to conserve it by compressing core dumps immediately as they are generated?
Ideally the method would work on older versions of Linux such as 2.6.x.
The Linux kernel /proc/sys/kernel/core_pattern file will do what you want: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#191
Set the filename to something like |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz and your core files should be saved compressed for you.
For an embedded Linux system, the following change works well to generate compressed core files in two steps.
Step 1: create the script
touch /bin/gen_compress_core.sh
chmod +x /bin/gen_compress_core.sh
cat > /bin/gen_compress_core.sh
#!/bin/sh
exec /bin/gzip -f - >"/var/core/core-$1.$2.gz"
(press Ctrl+D to end the input)
Step 2: update the core pattern file
cat > /proc/sys/kernel/core_pattern
|/bin/gen_compress_core.sh %e %p
(press Ctrl+D to end the input)
As suggested by the other answer, the Linux kernel's /proc/sys/kernel/core_pattern file is a good place to start: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#141
As the documentation says, you can specify the special character "|", which tells the kernel to pipe the core dump to a program. As suggested, you could use |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz as the pattern; however, it didn't work for me. I expect the reason is that the kernel does not treat the > character as output redirection; it probably passes it as a parameter to gzip.
To avoid this problem, as others suggested, you can put the handler in a script at some location of your choosing. I am using /home/<username>/crashes/core.sh; create it with the following command, replacing <username> with your user. Alternatively, you can of course change the entire path.
echo -e '#!/bin/bash\nexec /bin/gzip -f - >"/home/<username>/crashes/core-$1-$2-$3-$4-$5.gz"' > ~/crashes/core.sh
This script takes 5 input parameters, concatenates them, and appends them to the core file name. Full paths must be used inside ~/crashes/core.sh; the location of the script itself is up to you. Now let's tell the kernel to use our script with its parameters when generating core files:
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
Again, <username> should be replaced (or the entire path adjusted to match the location and name of your core.sh script). The next step is to crash some program; let's create an example crashing .cpp file:
int main()
{
    int *a = nullptr;
    int b = *a;
}
After compiling and running, there are two possible outcomes; either we will see:
Segmentation fault (core dumped)
Or
Segmentation fault
If we see the latter, there are a few possible reasons:
ulimit is not set; ulimit -c should show the size limit for cores
apport or your distro's core dump collector is not running; this should be investigated further
there is an error in the script we wrote; to rule out the other causes, try a plain dump path first. The command below should create /tmp/core.dump:
sudo sysctl -w kernel.core_pattern="/tmp/core.dump"
I know there is already an answer to this question; however, it wasn't obvious to me why it wasn't working "out of the box", so I wanted to summarize my findings. I hope it helps someone.
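Once the handler is in place, a hedged end-to-end check might look like this (crash.cpp is a placeholder name for the null-dereference example above; note that the gzip step must be undone before gdb can read the core):
g++ -g crash.cpp -o crash
ulimit -c unlimited
./crash                                    # "Segmentation fault (core dumped)"
ls ~/crashes/                              # expect something like core-crash-<pid>-<host>-<time>.gz
zcat ~/crashes/core-crash-*.gz > /tmp/core
gdb ./crash /tmp/core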
