Compressing core files during core dump generation - Linux

Is there a way to compress core files as they are generated during a core dump?
If storage space in the system is limited, is there a way to conserve it by compressing core dumps immediately as they are generated?
Ideally the method would work on older kernel versions such as 2.6.x.

The Linux kernel /proc/sys/kernel/core_pattern file will do what you want: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#191
Set the filename to something like |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz and your core files should be saved compressed for you.
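One way to apply this, as a sketch (run as root, and assuming /var/crash exists); note that a later answer reports the kernel passes everything after the | as plain arguments to the helper rather than through a shell, so the > may not be interpreted and a small wrapper script, as in the answers below, is the more reliable route:
echo '|/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz' > /proc/sys/kernel/core_pattern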

For embedded Linux systems, the following change works well to generate compressed core files in two steps.
Step 1: create a script
touch /bin/gen_compress_core.sh
chmod +x /bin/gen_compress_core.sh
cat > /bin/gen_compress_core.sh
#!/bin/sh
exec /bin/gzip -f - >"/var/core/core-$1.$2.gz"
(finish the input with Ctrl+D)
Step 2: update the core pattern file
cat > /proc/sys/kernel/core_pattern
|/bin/gen_compress_core.sh %e %p
(finish with Ctrl+D)
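A rough way to test the setup (a sketch; it assumes the core directory exists and that core dumps are enabled for the shell):
mkdir -p /var/core
ulimit -c unlimited
sleep 60 &             # throwaway background process
kill -SEGV $!          # force it to dump core
ls /var/core           # expect something like core-sleep.<pid>.gz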

As suggested by the other answer, the Linux kernel /proc/sys/kernel/core_pattern file is a good place to start: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#141
As the documentation says, you can specify the special character "|", which tells the kernel to pipe the core dump to a script. As suggested you could use |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz as the name, however it doesn't seem to work for me. I expect the reason is that the kernel on my system doesn't treat the > character as a redirection; rather, it probably passes it as a parameter to gzip.
In order to avoid this problem, as others suggested, you can create your script in some location; I am using /home/<username>/crashes/core.sh. Create it using the following command, replacing <username> with your user. Alternatively you can of course change the entire path.
echo -e '#!/bin/bash\nexec /bin/gzip -f - >"/home/<username>/crashes/core-$1-$2-$3-$4-$5.gz"' > ~/crashes/core.sh
This script takes 5 input parameters, concatenates them and appends them to the core path. The full paths must be specified inside ~/crashes/core.sh, and the location of the script itself can also be changed. Now let's tell the kernel to use our executable, with parameters, when generating the core file:
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
Again, <username> should be replaced (or the entire path adjusted to match the location and name of your core.sh script). The next step is to crash some program; let's create an example crashing C++ file:
int main () {
    int * a = nullptr;
    int b = *a;
}
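For example, it could be compiled and run like this (g++ and the file name crash.cpp are only illustrative assumptions):
g++ -std=c++11 crash.cpp -o crash
ulimit -c unlimited    # make sure the core size limit is not 0
./crash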
After compiling and running, there are two possibilities; either we will see:
Segmentation fault (core dumped)
Or
Segmentation fault
In case we see the latter, there are a few possible reasons:
ulimit is not set; ulimit -c should show what the size limit for cores is
apport or your distro's core dump collector is not running; this should be investigated further
there is an error in the script we wrote; I suggest then falling back to a basic dump path to rule out the other causes, as the command below should create /tmp/core.dump:
sudo sysctl -w kernel.core_pattern="/tmp/core.dump"
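If /tmp/core.dump appears after re-running the crashing program, the pipe script is the likely culprit rather than ulimit or the crash collector. A quick check, using the hypothetical crash binary from above:
./crash
ls -l /tmp/core.dump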
I know there is already an answer to this question, however it wasn't obvious to me why it doesn't work "out of the box", so I wanted to summarize my findings. I hope it helps someone.

no core dump in /var/crash

I am trying to understand a bit how core dumps work.
I use this test.c file to generate a core dump:
#include <stdio.h>

void foo()
{
    int *ptr = 0;
    *ptr = 7;
}

int main()
{
    foo();
    return 0;
}
I compile with
gcc test.c -o test
Which gives me the following message when I run ./test
Segmentation fault (core dumped)
My file
/proc/sys/kernel/core_pattern
contains:
|/usr/share/apport/apport %p %s %c %d %P
I checked that I have the permissions to write to the directory
/var/crash/
but after the core dump there is nothing in this folder (/var/crash/).
I am using Ubuntu 17.04.
Do you know what could be going wrong here?
Edit:
I forgot to mention that I set the limits with:
ulimit -c unlimited
so the output of
ulimit -c
reads:
unlimited
I even tried to do what they say here in the section "How to enable apport", so I added a hash sign in front of
'problem_types': ['Bug', 'Package']
But with all of this, the core dump still cannot be found in /var/crash.
This link contains a checklist for why a coredump is not generated. I am adding the list below in case the link becomes inaccessible in the future; a quick shell check for several of these items follows the list.
The core would have been larger than the current limit.
You don't have the necessary permissions to dump core (directory and file). Notice that core dumps are placed in the dumping process's current directory, which could be different from that of the parent process.
Verify that the file system is writable and has sufficient free space.
If a subdirectory named core exists in the working directory, no core will be dumped.
If a file named core already exists but has multiple hard links, the kernel will not dump core.
Verify the permissions on the executable; if the executable has the suid or sgid bit enabled, core dumps will be disabled by default. The same is the case if you have execute permissions but no read permissions on the file.
Verify that the process has not changed working directory, core size limit, or dumpable flag.
Some kernel versions cannot dump processes with shared address space (AKA threads). Newer kernel versions can dump such processes but will append the pid to the file name.
The executable could be in a non-standard format not supporting core dumps. Each executable format must implement a core dump routine.
The segmentation fault could actually be a kernel Oops, check the system logs for any Oops messages.
The application called exit() instead of using the core dump handler.
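Several of these items can be checked quickly from a shell; a sketch (not exhaustive, and the paths are only examples):
ulimit -c                          # core size limit; 0 means no dumps
cat /proc/sys/kernel/core_pattern  # where and how cores are written
df -h .                            # free space in the working directory
ls -ld . core 2>/dev/null          # writable directory? stray "core" entry?
dmesg | tail                       # look for kernel Oops messages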
I was also struggling to get core dumps and I had the same problem with ulimit. The session-specific setting suggested by Niranjan also didn't work for me.
Finally I found the solution at https://serverfault.com/questions/216656/how-to-set-systemwide-ulimit-on-ubuntu
in /etc/security/limits.conf add:
root - core unlimited
* - core unlimited
And log out / log in.
Then
ulimit -c
on the terminal should return "unlimited" and core dumps are generated.
What filesize limit have you set for coredumps in your machine?
You can check it using
$ ulimit -c
If it is set to 0, then no coredumps will be generated; this is the default setting in most distros.
You can enable coredumps by setting it to 'unlimited' or using a specific filesize limit.
$ ulimit -c unlimited

How do I implement "file -s <file>" on Linux in pure Go?

Intent:
Does Go have the functionality (package or otherwise) to perform a special file stat on Linux, akin to the command file -s <path>?
Example:
[root@localhost ~]# file /proc/uptime
/proc/uptime: empty
[root@localhost ~]# file -s /proc/uptime
/proc/uptime: ASCII text
Use Case:
I have a fileglob of files in /proc/* that I need to very quickly detect if they are truly empty instead of appearing to be empty.
Using The os Package:
Code:
result, _ := os.Stat("/proc/uptime")
fmt.Println("Name:", result.Name(), " Size:", result.Size(), " Mode:", int(result.Mode()))
fmt.Printf("%q", result)
Result:
Name: uptime Size: 0 Mode: 292
&{"uptime" '\x00' 'Ĥ' {%!q(int64=63606896088) %!q(int32=413685520) %!q(*time.Location=&{ [] [] 0 0 <nil>})} {'\x03' %!q(uint64=4026532071) '\x01' '脤' '\x00' '\x00' '\x00' '\x00' '\x00' 'Ѐ' '\x00' {%!q(int64=1471299288) %!q(int64=413685520)} {%!q(int64=1471299288) %!q(int64=413685520)} {%!q(int64=1471299288) %!q(int64=413685520)} ['\x00' '\x00' '\x00']}}
Obvious Workaround:
There is the obvious workaround of the following, but it's a little over the top to need to call out to a shell just to get file stats.
output, _ := exec.Command("bash", "-c", "file -s /proc/uptime").Output()
//parse output etc...
EDIT/MY PRACTICAL USE CASE:
Quickly determining which files are zero size without needing to read each one of them first.
file -s /cgroup/memory/lsf/<cluster>/*/tasks | <clean up commands> | uniq -c
6 /cgroup/memory/lsf/<cluster>/<jobid>/tasks: ASCII text
805 /cgroup/memory/lsf/<cluster>/<jobid>/tasks: empty
So in this case, I know that only those 6 jobs are running and the rest (805) have terminated. Reading the file works like this:
# cat /cgroup/memory/lsf/<cluster>/<jobid>/tasks
#
or
# cat /cgroup/memory/lsf/<cluster>/<jobid>/tasks
12352
53455
...
I'm afraid you might be confusing matters here: file is special precisely in that it "knows" a set of heuristics to carry out its task.
To my knowledge, Go does not have anything like this in its standard library, and I've not come across a 3rd-party package implementing file-like functionality (though I invite you to search by relevant keywords on http://godoc.org).
On the other hand, Go provides full access to the syscall interface of the underlying OS, so when it comes to querying the OS in the way file does, there's nothing you could not do in plain Go.
So I suggest you just fetch the source code of file, learn what it does in the mode turned on by the "-s" command-line option, and implement that in your Go code.
We'll try to help you with specific problems doing that, should you have any.
Update
Looks like I've managed to grasp what the OP is struggling with: a simple check:
$ stat -c %s /proc/$$/status && wc -c < $_
0
849
That is, the stat call on a file under /proc shows it has no contents, but actually reading from that file returns the contents.
OK, so the solution is simple: instead of calling os.Stat() while traversing the subtree of the filesystem, one should merely attempt to read a single byte from the file, for instance:
// isEmpty reports whether reading from the file at fname yields no data.
// (It needs "io" and "os" from the standard library.)
func isEmpty(fname string) (bool, error) {
    var buf [1]byte
    f, err := os.Open(fname)
    if err != nil {
        // Deal with the error, or maybe ignore it.
        // A not existing file is OK to ignore (the POSIX error code will be
        // ENOENT) because after path/filepath.Walk() fetched an entry for
        // this file from its directory, the file might well have gone.
        return false, err
    }
    defer f.Close()
    _, err = f.Read(buf[:])
    if err == io.EOF {
        // OK, we failed to read even 1 byte, so the file is empty.
        return true, nil
    }
    // Otherwise, deal with the error (err is nil if a byte was read).
    return false, err
}
You might try to be more clever and first obtain the stat information (using a call to os.Stat()) to see if the file is a regular file, so as not to attempt reading from sockets etc.
I have a fileglob of files in /proc/* that I need to very quickly detect if they are truly empty instead of appearing to be empty.
They are truly empty in some sense (e.g. they occupy no space on the file system). If you want to check whether any data can be read from them, try reading from them; that's what file -s does:
-s, --special-files
Normally, file only attempts to read and determine the type of argument files which stat(2) reports are ordinary files. This prevents problems, because reading special files may have peculiar consequences. Specifying the -s option causes file to also read argument files which are block or character special files. This is useful for determining the filesystem types of the data in raw disk partitions, which are block special files. This option also causes file to disregard the file size as reported by stat(2), since on some systems it reports a zero size for raw disk partitions.

Linux - export output from apachetop to file

Is it possible to export the output from apachetop to a file? Something like "apachetop > file", but because apachetop runs forever, this command also runs forever. I just need to obtain the current output from the program and handle it in my GTK# application.
Every answer will be very appreciated.
Matej.
This might work:
{ apachetop > file 2>&1 & sleep 1; kill $! ; }
but no guarantees :)
Another way on Linux is to find out the /dev/vcsN device that is being used when running the program and read from that file directly. It contains a copy of the screen data for a given VT; I'm not sure if there is an applicable device for a pty.
Well, indirectly apachetop is using the access.log file to get its data.
Look at
/var/log/apache2/access.log
You'll simply have to parse the file to get the info you're looking for!
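For instance, a minimal parsing sketch (it assumes the default combined log format, where the request path is the seventh whitespace-separated field):
# count requests per URL
awk '{ print $7 }' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head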

on-the-fly output redirection, seeing the file redirection output while the program is still running

If I use a command like this one:
./program >> a.txt &
and the program is a long-running one, then I can only see the output once the program has ended. That means I have no way of knowing whether the computation is going well until it actually stops. I want to be able to read the redirected output in the file while the program is still running.
This is similar to opening a file, appending to it, then closing it back after every write. If the file is only closed at the end of the program, then no data can be read from it until the program ends. The only redirection I know of behaves as if the file were closed at the end of the program.
You can test it with this little Python script. The language doesn't matter; any program that writes to standard output has the same problem.
l = range(0, 100000)
for i in l:
    if i % 1000 == 0:
        print i
    for j in l:
        s = i + j
One can run this with:
python program.py >> a.txt &
Then cat a.txt; you will only get results once the script is done computing.
From the stdout manual page:
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed.
Bottom line: Unless the output is a terminal, your program will have its standard output in fully buffered mode by default. This essentially means that it will output data in large-ish blocks, rather than line-by-line, let alone character-by-character.
Ways to work around this:
Fix your program: If you need real-time output, you need to fix your program. In C you can use fflush(stdout) after each output statement, or setvbuf() to change the buffering mode of the standard output. For Python there is sys.stdout.flush(), or even some of the suggestions here.
Use a utility that can record from a PTY, rather than outright stdout redirections. GNU Screen can do this for you:
screen -d -m -L python test.py
would be a start. This will log the output of your program to a file called screenlog.0 (or similar) in your current directory with a default delay of 10 seconds, and you can use screen to connect to the session where your command is running to provide input or terminate it. The delay and the name of the logfile can be changed in a configuration file or manually once you connect to the background session.
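To watch that log while the program is still running, something like this should do (screenlog.0 being the default name mentioned above):
tail -f screenlog.0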
EDIT:
On most Linux systems there is a third workaround: you can use the LD_PRELOAD variable and a preloaded library to override selected functions of the C library and use them to set the stdout buffering mode when those functions are called by your program. This method may work, but it has a number of disadvantages:
It won't work at all on static executables
It's fragile and rather ugly.
It won't work at all with SUID executables - the dynamic loader will refuse to read the LD_PRELOAD variable when loading such executables for security reasons.
It's fragile and rather ugly.
It requires that you find and override a library function that is called by your program after it initially sets the stdout buffering mode and preferably before any output. getenv() is a good choice for many programs, but not all. You may have to override common I/O functions such as printf() or fwrite() - if push comes to shove you may just have to override all functions that control the buffering mode and introduce a special condition for stdout.
It's fragile and rather ugly.
It's hard to ensure that there are no unwelcome side-effects. To do this right you'd have to ensure that only stdout is affected and that your overrides will not crash the rest of the program if e.g. stdout is closed.
Did I mention that it's fragile and rather ugly?
That said, the process is relatively simple. You put the replacement functions in a C file, e.g. linebufferedstdout.c:
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>

char *getenv(const char *s) {
    static char *(*getenv_real)(const char *s) = NULL;
    if (getenv_real == NULL) {
        /* First call: look up the real getenv() and, as a side effect,
         * switch stdout to line-buffered mode. */
        getenv_real = dlsym(RTLD_NEXT, "getenv");
        setlinebuf(stdout);
    }
    return getenv_real(s);
}
Then you compile that file as a shared object:
gcc -O2 -o linebufferedstdout.so -fpic -shared linebufferedstdout.c -ldl -lc
Then you set the LD_PRELOAD variable to load it along with your program:
$ LD_PRELOAD=./linebufferedstdout.so python test.py | tee -a test.out
0
1000
2000
3000
4000
If you are lucky, your problem will be solved with no unfortunate side-effects.
You can set the LD_PRELOAD library in the shell, if necessary, or even specify that library system-wide (definitely NOT recommended) in /etc/ld.so.preload.
If you're trying to modify the behavior of an existing program, try stdbuf (part of coreutils starting with version 7.5, apparently).
This buffers stdout up to a line:
stdbuf -oL command > output
This disables stdout buffering altogether:
stdbuf -o0 command > output
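Applied to the question's example, this might look like the following (program.py and a.txt are the names from the question; stdbuf relies on the program using C stdio, which holds for the Python 2 script above):
stdbuf -oL python program.py >> a.txt &
tail -f a.txt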
Have you considered piping to tee?
./program | tee a.txt
However, even tee won't work if "program" doesn't write anything to stdout until it is done. So, the effectiveness depends a lot on how your program behaves.
If the program writes to a file, you can read it while it is being written using tail -f a.txt.
Your problem is that most programs check whether the output is a terminal or not. If the output is a terminal, output is buffered one line at a time (so each line is output as it is generated), but if the output is not a terminal, the output is buffered in larger chunks (4096 bytes at a time is typical). This is the normal behaviour of the C library (when using printf, for example) and of the C++ library (when using cout, for example), so any program written in C or C++ will do this.
Most other scripting languages (like Perl, Python, etc.) are implemented in C or C++, and so they have exactly the same buffering behaviour.
The answer above (using LD_PRELOAD) can be made to work on Perl or Python scripts, since the interpreters are themselves written in C.
The unbuffer command from the expect package does exactly what you are looking for.
$ sudo apt-get install expect
$ unbuffer python program.py | cat -
<watch output immediately show up here>

valgrind : Opening several suppression files at once

I have a script which executes my unit tests using valgrind. The script has become big, because I have maybe 10 suppression files (one per library), and it is possible that I will have to add more suppression files.
Now, instead of having a line like this:
MEMCHECK_OPTIONS="--tool=memcheck -q -v --num-callers=24 --leak-check=full --show-below-main=no --undef-value-errors=yes --leak-resolution=high --show-reachable=yes --error-limit=no --xml=yes --suppressions=$SUPPRESSION_FILES_DIR/suppression_stdlib.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_cg.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_glut.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_xlib.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_glibc.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_glib.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_qt.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_sdl.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_magick.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_sqlite.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_ld.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_selinux.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_opengl.supp"
I tried doing like this:
MEMCHECK_OPTIONS="--tool=memcheck -q -v --num-callers=24 --leak-check=full --show-below-main=no --undef-value-errors=yes --leak-resolution=high --show-reachable=yes --error-limit=no --xml=yes --suppressions=$SUPPRESSION_FILES_DIR/*.supp"
but valgrind needs a filename (it doesn't accept the asterisk).
Since I am doing this in a bash script, can someone tell me what is the easiest way to form that line?
I thought about listing all the files in the suppression directory, then iterating over that list and adding the --suppressions= prefix.
EDIT
I forgot to ask. This is what I have so far:
ALL_SUPPRESION_FILES=`ls $SUPPRESSION_FILES_DIR/*.supp`
but I cannot find out how to turn that into an array. Can someone help?
Just do it this way:
# form the list of suppression files to pass to the valgrind
VALGRIND_SUPPRESSION_FILES_LIST=""
for SUPPRESSION_FILE in $SUPPRESSION_FILES_DIR/*.supp; do
VALGRIND_SUPPRESSION_FILES_LIST+=" --suppressions=$SUPPRESSION_FILE"
done
There's no need for ls.
Here's a way to do it without a loop:
array=($SUPPRESSION_FILES_DIR/*.supp)
VALGRIND_SUPPRESSION_FILES_LIST=${array[@]/#/--suppressions=}
Neither of these work properly if filenames contain spaces, but additional steps can take care of that.
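Either way, the resulting list can then be folded into the valgrind options, roughly like this (the options are taken from the question and ./unit_tests is just a placeholder):
MEMCHECK_OPTIONS="--tool=memcheck -q -v --num-callers=24 --leak-check=full $VALGRIND_SUPPRESSION_FILES_LIST"
valgrind $MEMCHECK_OPTIONS ./unit_tests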
For those still facing this problem, have a look at the Valgrind Suppression File Howto.
When valgrind runs its default tool, Memcheck, it automatically tries to read a file called $PREFIX/lib/valgrind/default.supp ($PREFIX will normally be /usr). However you can make it use additional suppression files of your choice by adding --suppressions= to your command-line invocation. You can repeat this up to 100 times, which should be sufficient for most situations ;)
Rather than having to type this each time, it's more sensible to write it to an rc file. Each time it runs, valgrind looks for options in files called ~/.valgrindrc and ./.valgrindrc. [...]
Create the files if they don't already exist. So I now have a ~/.valgrindrc containing:
--memcheck:leak-check=full
--show-reachable=yes
--suppressions=/file/path/file1.supp
--suppressions=/file/path/file2.supp
To check that valgrind is actually using the suppression files, run it with the -v option. The list of suppression files read is near the beginning of the output.
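For example, a rough check along these lines (./unit_tests is a placeholder for your test binary):
valgrind -v ./unit_tests 2>&1 | grep '\.supp'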
Well, I managed to solve the issue this way:
# form the list of suppression files to pass to the valgrind
ALL_SUPPRESION_FILES=`ls $SUPPRESSION_FILES_DIR/*.supp`
VALGRIND_SUPPRESSION_FILES_LIST=""
for SUPPRESSION_FILE in ${ALL_SUPPRESION_FILES[@]}; do
VALGRIND_SUPPRESSION_FILES_LIST="$VALGRIND_SUPPRESSION_FILES_LIST --suppressions=$SUPPRESSION_FILE"
done
I used string tokenization and concatenation to form the list.
