How to access Parallel port in Linux

On my Linux machine (Debian Wheezy), I tried to access the parallel port with request_region(), but it failed because the system had already loaded the kernel module parport.
So I removed (rmmod) the modules lp, ppdev, parport_pc and parport. Then I could successfully insert my module.
However, inb() on the base address returned 0xff, no matter what value had been written.
Before removing those modules from the kernel, I could write and read this register. I then blacklisted those modules from being loaded at system start-up, and after that I could read and write these registers and my module worked as well. It seems that the cleanup function of parport_pc did something that made the hardware unusable (at least the state of the port is not the same as it was before the module was loaded).
My question is why, and what should I do to recover the port other than reloading parport_pc?

You can write a small C program that reads from and writes to the pins on the parallel port directly, by way of the outb and inb functions. You can then call that program from the command line or from some other script. By default, address 0x378 is the I/O port address of the first parallel port (LPT0), so it's just a matter of using inb and outb on this address. For example:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <asm/io.h> /* on newer glibc, ioperm/inb/outb come from <sys/io.h> */
#define base 0x378 /* LPT0 */
/* to compile: gcc -O parport.c -o parport
   after compiling, set suid: chmod +s parport, then copy it to /usr/sbin/ */
int main(void) {
    if (ioperm(base, 1, 1)) { /* request access to the data port */
        fprintf(stderr, "Couldn't open parallel port\n");
        exit(1);
    }
    outb(255, base); /* set all data pins high */
    sleep(5);
    outb(0, base);   /* set all data pins low */
    ioperm(base, 1, 0); /* release the port */
    return 0;
}
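Since your symptom is inb() returning 0xff, it may also be worth reading the registers back to see what state the port is in. Here is a minimal sketch along the same lines (not part of the original program; it assumes the same 0x378 base address):
#include <stdio.h>
#include <stdlib.h>
#include <asm/io.h> /* or <sys/io.h> on newer glibc */
#define base 0x378 /* LPT0 data register; base+1 is the status register */
int main(void) {
    if (ioperm(base, 2, 1)) { /* request access to the data and status ports */
        perror("ioperm");
        exit(1);
    }
    printf("data   = 0x%02x\n", inb(base));     /* often reads back the last value written, chipset permitting */
    printf("status = 0x%02x\n", inb(base + 1)); /* busy/ack/paper-out lines driven by the attached device */
    ioperm(base, 2, 0); /* release the ports */
    return 0;
}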

Some driver modules have blocked your access to the parallel port.
Edit the /etc/modprobe.d/blacklist.conf file, add the following lines, and then reboot your Linux system.
blacklist ppdev
blacklist lp
blacklist parport_pc
blacklist parport
And if CUPS is installed, you should also comment out the following lines in /etc/modules-load.d/cups-filters.conf:
#lp
#ppdev
#parport_pc
Here are some details:
https://stackoverflow.com/a/27423675/4350106

Related

Debug messages not being printed on the console

I am trying to enable printing of debug messages on the console.
#include <linux/kernel.h>
#include <linux/module.h>
MODULE_LICENSE("GPL");
static int test_hello_init(void)
{
printk(KERN_INFO"%s: In init\n", __func__);
return 0;
}
static void test_hello_exit(void)
{
printk(KERN_INFO"%s: In exit\n", __func__);
}
module_init(test_hello_init);
module_exit(test_hello_exit);
To get the Info messages on the console, I executed the following command: dmesg -n7
cat /proc/sys/kernel/printk
7 4 1 7
When I load the module using insmod, I don't get any messages on the terminal, though they do show up when I run dmesg. What mistake am I making here?
Messages from the kernel are not printed on your terminal (unless that terminal is the console specified with console= on the kernel command line). They are appended to the kernel log buffer, which lives inside the kernel and is accessible to user-space programs via the device file /dev/kmsg. The dmesg command reads this file in order to print the kernel log content on your terminal.
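If you want to see the same data without running dmesg, here is a minimal user-space sketch (not from the original answer) that reads records straight from /dev/kmsg, which is what dmesg itself does; depending on kernel.dmesg_restrict it may need root:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[8192];
    /* O_NONBLOCK so read() fails with EAGAIN once the buffered records are exhausted */
    int fd = open("/dev/kmsg", O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open /dev/kmsg");
        return 1;
    }
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf) - 1); /* one log record per read() */
        if (n <= 0)
            break;
        buf[n] = '\0';
        fputs(buf, stdout); /* each record looks like "prio,seq,timestamp,-;message\n" */
    }
    close(fd);
    return 0;
}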

Signal handling with qemu-user

On my machine I have an aarch64 binary, that is statically compiled. I run it using qemu-aarch64-static with the -g 6566 flag. In another terminal I start up gdb-multiarch and connect as target remote localhost:6566.
I expect the binary to raise a signal for which I have a handler defined in the binary. I set a breakpoint at the handler from inside gdb-multiarch after connecting to the remote. However, when the signal is raised, the breakpoint is not hit in gdb-multiarch. Instead, on the terminal that runs the binary, I get a message along the lines of:
[1] + 8388 suspended (signal) qemu-aarch64-static -g 6566 ./testbinary
Why does this happen? How can I set a breakpoint on the handler and debug it? I've tried SIGCHLD and SIGFPE.
This works for me with a recent QEMU:
$ cat sig.c
#include <stdlib.h>
#include <signal.h>
#include <stdio.h>
void handler(int sig) {
printf("In signal handler, signal %d\n", sig);
return;
}
int main(void) {
printf("hello world\n");
signal(SIGUSR1, handler);
raise(SIGUSR1);
printf("done\n");
return 0;
}
$ aarch64-linux-gnu-gcc -g -Wall -o sig sig.c -static
$ qemu-aarch64 -g 6566 ./sig
and then in another window:
$ gdb-multiarch
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
[etc]
(gdb) set arch aarch64
The target architecture is assumed to be aarch64
(gdb) file /tmp/sigs/sig
Reading symbols from /tmp/sigs/sig...done.
(gdb) target remote :6566
Remote debugging using :6566
0x0000000000400c98 in _start ()
(gdb) break handler
Breakpoint 1 at 0x400e44: file sig.c, line 6.
(gdb) c
Continuing.
Program received signal SIGUSR1, User defined signal 1.
0x0000000000405c68 in raise ()
(gdb) c
Continuing.
Breakpoint 1, handler (sig=10) at sig.c:6
6 printf("In signal handler, signal %d\n", sig);
(gdb)
As you can see, gdb gets control both immediately the process receives the signal and then again when we hit the breakpoint for the handler function.
Incidentally, (integer) dividing by zero is not a reliable way to provoke a signal. This is undefined behaviour in C, and the implementation is free to do the most convenient thing. On x86 this typically results in a SIGFPE. On ARM you will typically find that the result is zero and execution will continue without a signal. (This is a manifestation of the different behaviour of the underlying hardware instructions for division between the two architectures.)
While doing some research on this, I found the following answer:
"Internally, bad memory accesses result in the Mach exception EXC_BAD_ACCESS being sent to the program. Normally, this is translated into a SIGBUS UNIX signal. However, gdb intercepts Mach exceptions directly, before the signal translation. The solution is to give gdb the command set dont-handle-bad-access 1 before running your program. Then the normal mechanism is used, and breakpoints inside your signal handler are honored."
The link is gdb: set a breakpoint for a SIGBUS handler
It may help you, considering that QEMU does not change the behaviour of these basic operations.

Raw capture capabilities (CAP_NET_RAW, CAP_NET_ADMIN) not working outside /usr/bin and friends for packet capture program using libpcap

TL;DR: Why are cap_net_raw, cap_net_admin capabilities only working in /usr/bin (or /usr/sbin), but not other places? Can this be configured someplace?
I'm having problems assigning capabilities to my C program utilizing libpcap on Ubuntu 14.04. Even after assigning capabilities using setcap(8) and checking them using getcap(8), I still get a permission error. It seems capabilities only work for executables in /usr/bin and friends.
My program test.c looks as follows:
#include <stdio.h>
#include <pcap.h>
int main(int argc, char **argv) {
if (argc != 2) {
printf("Specify interface \n");
return -1;
}
char errbuf[PCAP_ERRBUF_SIZE];
struct pcap* pcap = pcap_open_live(argv[1], BUFSIZ, 1, 0, errbuf);
if (pcap == NULL) {
printf("%s\n", errbuf);
return -1;
}
return 0;
}
and is compiled with
gcc test.c -lpcap
generating a.out executable. I set capabilities:
sudo setcap cap_net_raw,cap_net_admin=eip ./a.out
And check to see that it looks right:
getcap a.out
which gives me
a.out = cap_net_admin,cap_net_raw+eip
Running a.out gives me:
./a.out eth0
eth0: You don't have permission to capture on that device (socket: Operation not permitted)
Running with sudo works as expected (program prints nothing and exits).
Here's the interesting part: If I move a.out to /usr/bin (and reapply the capabilities), it works. Vice versa: taking the capability-enabled /usr/bin/dumpcap from wireshark (which works fine for users in the wireshark group) and moving it out of /usr/bin, say to my home dir, reapplying the same capabilities, it doesn't work. Moving it back, it works.
SO: Why are these capabilities only working in /usr/bin (and /usr/sbin), but not other places? Can this be configured someplace?
This might be because your home directory is mounted nosuid, which seems to prevent capabilities from working. Ubuntu encrypts the home directory and mounts it with ecryptfs as nosuid.
Binaries with capabilities work for me in /usr/ and in /home/, but not inside my (ecryptfs-mounted) home directory.
The only reference I could find to nosuid defeating capabilities is this link: http://www.gossamer-threads.com/lists/linux/kernel/1860853#1860853. I would love to find an authoritative source.
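If you want to confirm whether this is what is happening, one way (a minimal sketch, not from the original answer) is to ask statvfs(3) whether the filesystem containing a given directory is mounted nosuid; the mount options shown by mount or findmnt tell you the same thing:
#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv) {
    struct statvfs vfs;
    const char *path = (argc > 1) ? argv[1] : "."; /* directory to check, default: cwd */
    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }
    /* ST_NOSUID set means setuid bits and file capabilities are ignored here */
    printf("%s is on a %s mount\n", path,
           (vfs.f_flag & ST_NOSUID) ? "nosuid" : "suid-capable");
    return 0;
}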

Is there a way to figure out what is using a Linux kernel module?

If I load a kernel module and list the loaded modules with lsmod, I can get the "use count" of the module (number of other modules with a reference to the module). Is there a way to figure out what is using a module, though?
The issue is that a module I am developing insists its use count is 1 and thus I cannot use rmmod to unload it, but its "by" column is empty. This means that every time I want to re-compile and re-load the module, I have to reboot the machine (or, at least, I can't figure out any other way to unload it).
Actually, there seems to be a way to list processes that claim a module/driver - however, I haven't seen it advertised (outside of Linux kernel documentation), so I'll jot down my notes here:
First of all, many thanks to @haggai_e's answer; the pointer to the functions try_module_get and module_put as those responsible for managing the use count (refcount) was the key that allowed me to track down the procedure.
Looking further for this online, I somehow stumbled upon the post Linux-Kernel Archive: [PATCH 1/2] tracing: Reduce overhead of module tracepoints; which finally pointed to a facility present in the kernel, known as (I guess) "tracing"; the documentation for this is in the directory Documentation/trace - Linux kernel source tree. In particular, two files explain the tracing facility, events.txt and ftrace.txt.
But, there is also a short "tracing mini-HOWTO" on a running Linux system in /sys/kernel/debug/tracing/README (see also I'm really really tired of people saying that there's no documentation…); note that in the kernel source tree, this file is actually generated by the file kernel/trace/trace.c. I've tested this on Ubuntu natty, and note that since /sys is owned by root, you have to use sudo to read this file, as in sudo cat or
sudo less /sys/kernel/debug/tracing/README
... and that goes for pretty much all other operations under /sys which will be described here.
First of all, here is a simple minimal module/driver code (which I put together from the referred resources), which simply creates a /proc/testmod-sample file node, which returns the string "This is testmod." when it is being read; this is testmod.c:
/*
https://github.com/spotify/linux/blob/master/samples/tracepoints/tracepoint-sample.c
https://www.linux.com/learn/linux-training/37985-the-kernel-newbie-corner-kernel-debugging-using-proc-qsequenceq-files-part-1
*/
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h> // for sequence files
struct proc_dir_entry *pentry_sample;
char *defaultOutput = "This is testmod.";
static int my_show(struct seq_file *m, void *v)
{
seq_printf(m, "%s\n", defaultOutput);
return 0;
}
static int my_open(struct inode *inode, struct file *file)
{
return single_open(file, my_show, NULL);
}
static const struct file_operations mark_ops = {
.owner = THIS_MODULE,
.open = my_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int __init sample_init(void)
{
printk(KERN_ALERT "sample init\n");
pentry_sample = proc_create(
"testmod-sample", 0444, NULL, &mark_ops);
if (!pentry_sample)
return -EPERM;
return 0;
}
static void __exit sample_exit(void)
{
printk(KERN_ALERT "sample exit\n");
remove_proc_entry("testmod-sample", NULL);
}
module_init(sample_init);
module_exit(sample_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Mathieu Desnoyers et al.");
MODULE_DESCRIPTION("based on Tracepoint sample");
This module can be built with the following Makefile (just have it placed in the same directory as testmod.c, and then run make in that same directory):
CONFIG_MODULE_FORCE_UNLOAD=y
# for oprofile
DEBUG_INFO=y
EXTRA_CFLAGS=-g -O0
obj-m += testmod.o
# mind the tab characters needed at start here:
all:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
When this module/driver is built, the output is a kernel object file, testmod.ko.
At this point, we can prepare the event tracing related to try_module_get and try_module_put; those are in /sys/kernel/debug/tracing/events/module:
$ sudo ls /sys/kernel/debug/tracing/events/module
enable filter module_free module_get module_load module_put module_request
Note that on my system, tracing is by default enabled:
$ sudo cat /sys/kernel/debug/tracing/tracing_enabled
1
... however, the module tracing (specifically) is not:
$ sudo cat /sys/kernel/debug/tracing/events/module/enable
0
Now, we should first make a filter, that will react on the module_get, module_put etc events, but only for the testmod module. To do that, we should first check the format of the event:
$ sudo cat /sys/kernel/debug/tracing/events/module/module_put/format
name: module_put
ID: 312
format:
...
field:__data_loc char[] name; offset:20; size:4; signed:1;
print fmt: "%s call_site=%pf refcnt=%d", __get_str(name), (void *)REC->ip, REC->refcnt
Here we can see that there is a field called name, which holds the driver name, which we can filter against. To create a filter, we simply echo the filter string into the corresponding file:
sudo bash -c "echo name == testmod > /sys/kernel/debug/tracing/events/module/filter"
Here, first note that since we have to call sudo, we have to wrap the whole echo redirection as an argument command of a sudo-ed bash. Second, note that since we wrote to the "parent" module/filter, not the specific events (which would be module/module_put/filter etc), this filter will be applied to all events listed as "children" of module directory.
Finally, we enable tracing for module:
sudo bash -c "echo 1 > /sys/kernel/debug/tracing/events/module/enable"
From this point on, we can read the trace log file; for me, reading the blocking, "piped" version of the trace file worked, like this:
sudo cat /sys/kernel/debug/tracing/trace_pipe | tee tracelog.txt
At this point, we will not see anything in the log - so it is time to load (and utilize, and remove) the driver (in a different terminal from where trace_pipe is being read):
$ sudo insmod ./testmod.ko
$ cat /proc/testmod-sample
This is testmod.
$ sudo rmmod testmod
If we go back to the terminal where trace_pipe is being read, we should see something like:
# tracer: nop
#
# TASK-PID CPU# TIMESTAMP FUNCTION
# | | | | |
insmod-21137 [001] 28038.101509: module_load: testmod
insmod-21137 [001] 28038.103904: module_put: testmod call_site=sys_init_module refcnt=2
rmmod-21354 [000] 28080.244448: module_free: testmod
That is pretty much all we will obtain for our testmod driver - the refcount changes only when the driver is loaded (insmod) or unloaded (rmmod), not when we do a read through cat. So we can simply interrupt the read from trace_pipe with CTRL+C in that terminal; and to stop the tracing altogether:
sudo bash -c "echo 0 > /sys/kernel/debug/tracing/tracing_enabled"
Here, note that most examples refer to reading the file /sys/kernel/debug/tracing/trace instead of trace_pipe as here. However, one problem is that this file is not meant to be "piped" (so you shouldn't run a tail -f on this trace file); but instead you should re-read the trace after each operation. After the first insmod, we would obtain the same output from cat-ing both trace and trace_pipe; however, after the rmmod, reading the trace file would give:
<...>-21137 [001] 28038.101509: module_load: testmod
<...>-21137 [001] 28038.103904: module_put: testmod call_site=sys_init_module refcnt=2
rmmod-21354 [000] 28080.244448: module_free: testmod
... that is: at this point, the insmod had already been exited for long, and so it doesn't exist anymore in the process list - and therefore cannot be found via the recorded process ID (PID) at the time - thus we get a blank <...> as process name. Therefore, it is better to log (via tee) a running output from trace_pipe in this case. Also, note that in order to clear/reset/erase the trace file, one simply writes a 0 to it:
sudo bash -c "echo 0 > /sys/kernel/debug/tracing/trace"
If this seems counterintuitive, note that trace is a special file, and will always report a file size of zero anyways:
$ sudo ls -la /sys/kernel/debug/tracing/trace
-rw-r--r-- 1 root root 0 2013-03-19 06:39 /sys/kernel/debug/tracing/trace
... even if it is "full".
Finally, note that if we didn't implement a filter, we would have obtained a log of all module calls on the running system - which would log any call (also background) to grep and such, as those use the binfmt_misc module:
...
tr-6232 [001] 25149.815373: module_put: binfmt_misc call_site=search_binary_handler refcnt=133194
..
grep-6231 [001] 25149.816923: module_put: binfmt_misc call_site=search_binary_handler refcnt=133196
..
cut-6233 [000] 25149.817842: module_put: binfmt_misc call_site=search_binary_handler refcnt=129669
..
sudo-6234 [001] 25150.289519: module_put: binfmt_misc call_site=search_binary_handler refcnt=133198
..
tail-6235 [000] 25150.316002: module_put: binfmt_misc call_site=search_binary_handler refcnt=129671
... which adds quite a bit of overhead (in both the amount of log data and the processing time required to generate it).
While looking this up, I stumbled upon Debugging Linux Kernel by Ftrace PDF, which refers to a tool trace-cmd, which pretty much does the similar as above - but through an easier command line interface. There is also a "front-end reader" GUI for trace-cmd called KernelShark; both of these are also in Debian/Ubuntu repositories via sudo apt-get install trace-cmd kernelshark. These tools could be an alternative to the procedure described above.
Finally, I'd just note that, while the above testmod example doesn't really show use in the context of multiple claims, I have used the same tracing procedure to discover that a USB module I'm writing was repeatedly claimed by pulseaudio as soon as the USB device was plugged in - so the procedure seems to work for such use cases.
It says on the Linux Kernel Module Programming Guide that the use count of a module is controlled by the functions try_module_get and module_put. Perhaps you can find where these functions are called for your module.
More info: https://www.kernel.org/doc/htmldocs/kernel-hacking/routines-module-use-counters.html
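For context, the use count goes up and down at call sites like the following; this is a hypothetical fragment (not from the guide, and not a complete module) showing how one module would pin another module in memory while using it:
#include <linux/module.h>

/* Hypothetical helper: take a reference on 'other' so it cannot be
 * unloaded while we use it, then drop the reference again. Every
 * successful try_module_get() must be balanced by a module_put(). */
static int use_other_module(struct module *other)
{
    if (!try_module_get(other)) /* fails if 'other' is already being unloaded */
        return -ENODEV;
    /* ... call into symbols exported by 'other' ... */
    module_put(other); /* use count drops back down */
    return 0;
}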
All you get is a list of which modules depend on which other modules (the "Used by" column in lsmod). You can't write a program to tell why a module was loaded, whether it is still needed for anything, or what might break if you unload it and everything that depends on it.
You might try lsof or fuser.
If you use rmmod WITHOUT the --force option, it will tell you what is using a module. Example:
$ lsmod | grep firewire
firewire_ohci 24695 0
firewire_core 50151 1 firewire_ohci
crc_itu_t 1717 1 firewire_core
$ sudo modprobe -r firewire-core
FATAL: Module firewire_core is in use.
$ sudo rmmod firewire_core
ERROR: Module firewire_core is in use by firewire_ohci
$ sudo modprobe -r firewire-ohci
$ sudo modprobe -r firewire-core
$ lsmod | grep firewire
$
Try kgdb and set a breakpoint in your module.
For anyone desperate to figure out why they can't reload modules, I was able to work around this problem by:
1. Getting the path of the currently used module using modinfo
2. rm -rf'ing it
3. Copying the new module I wanted to load to the path it was in
4. Typing "modprobe DRIVER_NAME.ko"

How to generate a core dump in Linux on a segmentation fault?

I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?
This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type
ulimit -c unlimited
then that will tell bash that its programs can dump cores of any size. You can specify a size such as 52M instead of unlimited if you want, but in practice this shouldn't be necessary since the size of core files will probably never be an issue for you.
In tcsh, you'd type
limit coredumpsize unlimited
As explained above, the real question being asked here is how to enable core dumps on a system where they are not enabled. That question is answered here.
If you've come here hoping to learn how to generate a core dump for a hung process, the answer is
gcore <pid>
if gcore is not available on your system then
kill -ABRT <pid>
Don't use kill -SEGV, as that will often invoke a signal handler, making it harder to diagnose the stuck process.
To check where the core dumps are generated, run:
sysctl kernel.core_pattern
or:
cat /proc/sys/kernel/core_pattern
In the resulting pattern, %e is the process name and %t the system time. You can change it in /etc/sysctl.conf and reload the setting with sysctl -p.
If the core files are not generated (test it by: sleep 10 & and killall -SIGSEGV sleep), check the limits by: ulimit -a.
If your core file size is limited, run:
ulimit -c unlimited
to make it unlimited.
Then test again: if the core dump is successful, you will see "(core dumped)" after the segmentation fault indication, as below:
Segmentation fault: 11 (core dumped)
See also: core dumped - but core file is not in current directory?
Ubuntu
In Ubuntu the core dumps are handled by Apport and can be located in /var/crash/. However, it is disabled by default in stable releases.
For more details, please check: Where do I find the core dump in Ubuntu?.
macOS
For macOS, see: How to generate core dumps in Mac OS X?
What I did in the end was attach gdb to the process before it crashed, and then when it got the segfault I executed the generate-core-file command. That forced generation of a core dump.
Maybe you could do it this way. This program demonstrates how to trap a segmentation fault: it shells out to a debugger (this is the original code used under AIX) and prints the stack trace up to the point of the segmentation fault. For Linux you will need to change the sprintf call to use gdb instead of dbx.
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <stdarg.h>
#include <unistd.h> /* for getpid() used in dumpstack() */
static void signal_handler(int);
static void dumpstack(void);
static void cleanup(void);
void init_signals(void);
void panic(const char *, ...);
struct sigaction sigact;
char *progname;
int main(int argc, char **argv) {
char *s;
progname = *(argv);
atexit(cleanup);
init_signals();
printf("About to seg fault by assigning zero to *s\n");
*s = 0;
sigemptyset(&sigact.sa_mask);
return 0;
}
void init_signals(void) {
sigact.sa_handler = signal_handler;
sigemptyset(&sigact.sa_mask);
sigact.sa_flags = 0;
sigaction(SIGINT, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGSEGV);
sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGBUS);
sigaction(SIGBUS, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGQUIT);
sigaction(SIGQUIT, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGHUP);
sigaction(SIGHUP, &sigact, (struct sigaction *)NULL);
sigaddset(&sigact.sa_mask, SIGKILL);
sigaction(SIGKILL, &sigact, (struct sigaction *)NULL); /* note: SIGKILL cannot actually be caught */
}
static void signal_handler(int sig) {
if (sig == SIGHUP) panic("FATAL: Program hanged up\n");
if (sig == SIGSEGV || sig == SIGBUS){
dumpstack();
panic("FATAL: %s Fault. Logged StackTrace\n", (sig == SIGSEGV) ? "Segmentation" : ((sig == SIGBUS) ? "Bus" : "Unknown"));
}
if (sig == SIGQUIT) panic("QUIT signal ended program\n");
if (sig == SIGKILL) panic("KILL signal ended program\n");
if (sig == SIGINT) ;
}
void panic(const char *fmt, ...) {
char buf[50];
va_list argptr;
va_start(argptr, fmt);
vsnprintf(buf, sizeof(buf), fmt, argptr); /* bounded, unlike vsprintf */
va_end(argptr);
fprintf(stderr, "%s", buf); /* don't pass buf as a format string */
exit(-1);
}
static void dumpstack(void) {
/* Got this routine from http://www.whitefang.com/unix/faq_toc.html
** Section 6.5. Modified to redirect to file to prevent clutter
*/
/* This needs to be changed... */
char dbx[160];
sprintf(dbx, "echo 'where\ndetach' | dbx -a %d > %s.dump", getpid(), progname);
/* Change the dbx to gdb */
system(dbx);
return;
}
void cleanup(void) {
sigemptyset(&sigact.sa_mask);
/* Do any cleaning up chores here */
}
You may additionally have to add a parameter to get gdb to dump the core, as shown in this blog post.
There are more things that may influence the generation of a core dump. I encountered these:
The directory for the dump must be writable. By default this is the current directory of the process, but that may be changed by setting /proc/sys/kernel/core_pattern.
In some conditions, the kernel value in /proc/sys/fs/suid_dumpable may prevent the core dump from being generated (see the sketch after this answer).
There are more situations which may prevent the generation that are described in the man page - try man core.
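On the suid_dumpable point: a process that has changed its credentials (dropped privileges, switched users) normally has its "dumpable" flag cleared, and one knob you control from inside the program is prctl(2). A minimal sketch, an assumption on my part that this applies to your situation:
#include <stdio.h>
#include <sys/prctl.h>

int main(void) {
    /* Re-enable core dumps for this process; the flag is cleared
     * automatically when the process changes uid/gid or execs a
     * setuid binary. */
    if (prctl(PR_SET_DUMPABLE, 1, 0, 0, 0) != 0) {
        perror("prctl(PR_SET_DUMPABLE)");
        return 1;
    }
    printf("dumpable flag is now %d\n", prctl(PR_GET_DUMPABLE, 0, 0, 0, 0));
    return 0;
}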
For Ubuntu 14.04
Check core dump enabled:
ulimit -a
One of the lines should be :
core file size (blocks, -c) unlimited
If not:
Open ~/.bashrc (e.g. gedit ~/.bashrc), add ulimit -c unlimited to the end of the file, save, and re-open the terminal.
Build your application with debug information:
In the Makefile, use -O0 -g
Run the application that creates the core dump (a core dump file named core should be created next to the application_name file):
./application_name
Run under gdb:
gdb application_name core
In order to activate the core dump, do the following:
1. In /etc/profile comment out the line:
# ulimit -S -c 0 > /dev/null 2>&1
2. In /etc/security/limits.conf comment out the line:
* soft core 0
3. Execute the command limit coredumpsize unlimited and check it with the command limit:
# limit coredumpsize unlimited
# limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 10240 kbytes
coredumpsize unlimited
memoryuse unlimited
vmemoryuse unlimited
descriptors 1024
memorylocked 32 kbytes
maxproc 528383
#
4. To check whether the core file gets written, you can kill the relevant process with kill -s SEGV <PID> (this should not be needed; it is just a check in case no core file gets written):
# kill -s SEGV <PID>
5. Once the core file has been written, make sure to deactivate the core dump settings again in the files from steps 1.-3.!
Ubuntu 19.04
None of the other answers by themselves helped me, but the following summary did the job.
Create ~/.config/apport/settings with the following content:
[main]
unpackaged=true
(This tells apport to also write core dumps for custom apps)
Check ulimit -c. If it outputs 0, fix it with:
ulimit -c unlimited
Just in case, restart apport:
sudo systemctl restart apport
Crash files are now written to /var/crash/. But you cannot use them with gdb directly. To use them with gdb, run:
apport-unpack <location_of_report> <target_directory>
Further information:
Some answers suggest changing core_pattern. Be aware that this file might get overwritten by the apport service when it restarts.
Simply stopping apport did not do the job.
The ulimit -c value might get changed automatically while you're trying other answers from the web. Be sure to check it regularly while setting up your core dump creation.
References:
https://stackoverflow.com/a/47481884/6702598
By default you will get a core file. Check to see that the current directory of the process is writable, or no core file will be created.
It can be better to turn on core dumps programmatically, using the setrlimit system call.
example:
#include <stdbool.h>
#include <sys/resource.h>

bool enable_core_dump(void) {
    struct rlimit corelim;
    corelim.rlim_cur = RLIM_INFINITY;
    corelim.rlim_max = RLIM_INFINITY;
    return (0 == setrlimit(RLIMIT_CORE, &corelim));
}
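A minimal usage sketch (assuming the helper above is in the same translation unit): call it first thing in main(), before anything has a chance to crash. Note that setrlimit() can only raise the soft limit up to the hard limit, so the call fails if the hard limit is 0.
#include <stdio.h>
#include <stdbool.h>

bool enable_core_dump(void); /* the helper shown above */

int main(void) {
    if (!enable_core_dump())
        perror("setrlimit(RLIMIT_CORE)"); /* hard limit may still be 0 */
    /* ... the rest of the program ... */
    return 0;
}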
It's worth mentioning that if you have a systemd set up, then things are a little bit different. The set up typically would have the core files be piped, by means of core_pattern sysctl value, through systemd-coredump(8). The core file size rlimit would typically be configured as "unlimited" already.
It is then possible to retrieve the core dumps using coredumpctl(1).
The storage of core dumps, etc. is configured by coredump.conf(5). There are examples of how to get the core files in the coredumpctl man page, but in short, it would look like this:
Find the core file:
[vps#phoenix]~$ coredumpctl list test_me | tail -1
Sun 2019-01-20 11:17:33 CET 16163 1224 1224 11 present /home/vps/test_me
Get the core file:
[vps#phoenix]~$ coredumpctl -o test_me.core dump 16163
This is typically sufficient:
ulimit -c unlimited
Note this will not persist between ssh sessions! To add persistence:
echo '* soft core unlimited' >> /etc/security/limits.conf
Now, if you're using Ubuntu, "apport" is probably running. Here's how to check:
sudo systemctl status apport.service
If it is, you'll probably find core dumps in one of these places:
/var/lib/apport/coredump
/var/crash
If you want to change the location of core dumps:
Make sure that the directory you're sending core dumps to exists and that you have permission to create files in it!
Here's an example. Note this will not persist across reboots:
sysctl -w kernel.core_pattern=/coredumps/core-%e-%s-%u-%g-%p-%t
mkdir /coredumps
Make sure that the process that's crashing has access to write to this. The easiest way would be an example like this:
chmod 777 /coredumps
Test that core dumps work
> crash.c
gcc -Wl,--defsym=main=0 crash.c
./a.out
==output== Segmentation fault (core dumped)
If it doesn't say "core dumped" above, something isn't working.
