valgrind: Opening several suppression files at once - Linux

I have a script which executes my unit tests using valgrind. The script has become big, because I have maybe 10 suppression files (one per library), and it is possible that I will have to add more suppression files.
Now, instead of having a line like this:
MEMCHECK_OPTIONS="--tool=memcheck -q -v --num-callers=24 --leak-check=full --show-below-main=no --undef-value-errors=yes --leak-resolution=high --show-reachable=yes --error-limit=no --xml=yes --suppressions=$SUPPRESSION_FILES_DIR/suppression_stdlib.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_cg.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_glut.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_xlib.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_glibc.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_glib.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_qt.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_sdl.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_magick.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_sqlite.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_ld.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_selinux.supp --suppressions=$SUPPRESSION_FILES_DIR/suppression_opengl.supp"
I tried doing it like this:
MEMCHECK_OPTIONS="--tool=memcheck -q -v --num-callers=24 --leak-check=full --show-below-main=no --undef-value-errors=yes --leak-resolution=high --show-reachable=yes --error-limit=no --xml=yes --suppressions=$SUPPRESSION_FILES_DIR/*.supp"
but valgrind needs a filename (it doesn't accept the asterisk).
Since I am doing this in a bash script, can someone tell me the easiest way to form that line?
I thought about listing all files in the suppression directory, then iterating over that list and adding the --suppressions= prefix to each entry.
EDIT
I forgot to ask. This is what I have so far:
ALL_SUPPRESION_FILES=`ls $SUPPRESSION_FILES_DIR/*.supp`
but I cannot find out how to turn that into an array. Can someone help?

Just do it this way:
# form the list of suppression files to pass to the valgrind
VALGRIND_SUPPRESSION_FILES_LIST=""
for SUPPRESSION_FILE in $SUPPRESSION_FILES_DIR/*.supp; do
    VALGRIND_SUPPRESSION_FILES_LIST+=" --suppressions=$SUPPRESSION_FILE"
done
There's no need for ls.
Here's a way to do it without a loop:
array=($SUPPRESSION_FILES_DIR/*.supp)
VALGRIND_SUPPRESSION_FILES_LIST="${array[@]/#/--suppressions=}"
Neither of these works properly if filenames contain spaces, but additional steps can take care of that, as sketched below.
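A minimal sketch of a space-safe variant (assuming bash, since it relies on arrays): keep the options in an array rather than in a flat string, and expand it quoted so each option stays a single argument. The ./my_unit_tests name is only a placeholder.
# Build the options as an array so filenames with spaces survive intact.
VALGRIND_SUPPRESSION_OPTS=()
for f in "$SUPPRESSION_FILES_DIR"/*.supp; do
    VALGRIND_SUPPRESSION_OPTS+=("--suppressions=$f")
done
# Quoted expansion passes each option as exactly one argument.
valgrind "${VALGRIND_SUPPRESSION_OPTS[@]}" ./my_unit_tests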

For those who are still facing this problem: have a look at the Valgrind Suppression File Howto.
When valgrind runs its default tool, Memcheck, it automatically tries to read a file called $PREFIX/lib/valgrind/default.supp ($PREFIX will normally be /usr). However you can make it use additional suppression files of your choice by adding --suppressions= to your command-line invocation. You can repeat this up to 100 times, which should be sufficient for most situations ;)
Rather than having to type this each time, it's more sensible to write it to an rc file. Each time it runs, valgrind looks for options in files called ~/.valgrindrc and ./.valgrindrc. [...]
Create the files if they don't already exist. So I now have a ~/.valgrindrc containing:
--memcheck:leak-check=full
--show-reachable=yes
--suppressions=/file/path/file1.supp
--suppressions=/file/path/file2.supp
To check that valgrind is actually using the suppression files, run it with the -v option. The list of suppression files read is near the beginning of the output.
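As a quick spot-check (a sketch; the exact wording of the log lines varies between valgrind versions, and ./my_program is a placeholder), the verbose output can be filtered for suppression-related lines:
# -v makes valgrind report the suppression files it reads near the start
valgrind -v ./my_program 2>&1 | grep -i suppression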

Well, I managed to solve the issue this way:
# form the list of suppression files to pass to the valgrind
ALL_SUPPRESION_FILES=`ls $SUPPRESSION_FILES_DIR/*.supp`
VALGRIND_SUPPRESSION_FILES_LIST=""
for SUPPRESSION_FILE in ${ALL_SUPPRESION_FILES[@]}; do
VALGRIND_SUPPRESSION_FILES_LIST="$VALGRIND_SUPPRESSION_FILES_LIST --suppressions=$SUPPRESSION_FILE"
done
I used string tokenization and concatenation to form the list.

Related

Testing a modified version of readelf

I modified the readelf.c file in binutils-2.36.1/binutils/ such that it prints a few details differently with some flags, such as "s", "S", and "a", and doesn't affect the output of other flags.
I'm trying to test whether the changes I made to the file affected any flags other than the ones I intended (mentioned above),
and therefore I generated a few tests of the following format:
./binutils/readelf -g ./readelfTests/Objects/ObjectFiles/object_1.o
./binutils/readelf -n ./readelfTests/Objects/ObjectFiles/object_1.o
./binutils/readelf -e ./readelfTests/Objects/ObjectFiles/object_1.o
./binutils/readelf -S ./readelfTests/Objects/ObjectFiles/object_1.o
and so on, you get the point.
The problem is that the .o files I have are very basic, with few sections and variables, so running a test on them may not catch the errors in my code. I'd appreciate a way to get some .o files with a lot of sections and variables, such that running tests on them may actually be effective.
Alternatively, I'd appreciate a way to test my modified readelf in an automated way, as sketched below.
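One hedged sketch of an automated approach, assuming a stock readelf is installed as a reference: run both binaries over every object and flag, and diff the outputs. Differences should then show up only on the flags that were intentionally changed.
#!/bin/bash
# Hypothetical harness: compare the modified readelf against the system one.
REF=readelf                    # reference binary (assumed to be installed)
MOD=./binutils/readelf         # the modified build
for obj in ./readelfTests/Objects/ObjectFiles/*.o; do
    for flag in g n e S s a h l; do
        if ! diff <("$REF" "-$flag" "$obj") <("$MOD" "-$flag" "$obj") > /dev/null; then
            echo "output differs for -$flag on $obj"
        fi
    done
done
As for richer test objects, compiling any large C file with debug info (for example, gcc -c -g readelf.c itself) yields a .o with far more sections and symbols than a toy example; that is an assumption about what makes the tests effective, not a guarantee of coverage.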

Is it possible to display a file's contents and delete that file in the same command?

I'm trying to display the output of an AWS lambda that is being captured in a temporary text file, and I want to remove that file as I display its contents. Right now I'm doing:
... && cat output.json && rm output.json
Is there a clever way to combine those last two commands into one command? My goal is to make the full combined command string as short as possible.
For cases where:
• it is possible to control the name of the temporary text file, and
• the file is not used by other code,
it is possible to pass "/dev/stdout" as the name of the output file.
Regarding portability, see the Stack Exchange question "how portable ... /dev/stdout".
POSIX 7 says they are extensions.
Base Definitions,
Section 2.1.1 Requirements:
The system may provide non-standard extensions. These are features not required by POSIX.1-2008 and may include, but are not limited to:
[...]
• Additional character special files with special properties (for example,  /dev/stdin, /dev/stdout,  and  /dev/stderr)
Using the mandatorily supported /dev/tty would force output to the "current" terminal, making it impossible to pipe the output of the whole command into a different program (or a log file), or to use the program when there is no connected terminal (cron jobs, or other automation tools).
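A minimal sketch of the idea (my_tool and its --output flag are hypothetical stand-ins for whatever produces output.json):
# Before: write to a temp file, show it, then delete it
my_tool --output output.json && cat output.json && rm output.json
# After: write straight to stdout; no file is created, so nothing to delete
my_tool --output /dev/stdout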
No, you cannot easily remove the lines of a file while displaying them. It would be highly inefficient, as it would require removing characters from the beginning of the file each time you read a line. Current filesystems are pretty good at truncating data at the end of a file, but not at the beginning.
A simple but extremely slow method would look like this:
while [ -s output.json ]
do
    head -1 output.json
    sed -i 1d output.json
done
While this algorithm is plain and simple, you should know that each time you remove the first line with sed -i 1d, it copies the whole content of the file except the first line into a temporary file, resulting in approximately 0.5*n² lines written in total (where n is the number of lines in your file).
In theory you could avoid this by doing something like this:
while [ -s output.json ]
do
    line=$(head -1 output.json)
    printf -- '%s\n' "$line"
    # collapse the first line's bytes: its length plus the trailing newline
    fallocate -c -o 0 -l $((${#line}+1)) output.json
done
But this does not account for variable newline characters (namely DOS-formatted newlines) and fallocate does not always work on xfs, among other issues.
Since you are trying to consume a file alongside its creation without leaving a trace of its existence on disk, you are essentially asking for pipe functionality. In my opinion you should look into how your output.json file is produced; hopefully you can pipe it into a script of your own, as sketched below.
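A minimal sketch of both flavours (produce_output and your_processing_script are hypothetical stand-ins):
# Flavour 1: skip the file entirely and connect producer to consumer
produce_output | your_processing_script

# Flavour 2: a named pipe looks like a file but stores nothing on disk
mkfifo output.json
produce_output > output.json &   # the writer blocks until a reader attaches
cat output.json                  # streams the contents; nothing remains on disk
rm output.json                   # removes the FIFO node itself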

zip command not working

I am trying to zip files using a shell script. I am using the following command:
zip ./test/step1.zip $FILES
where $FILES contains all the input files. But I am getting a warning as follows:
zip warning: name not matched: myfile.dat
One more thing I observed is that the file which is last in the list of files in the folder gets the above warning, and that file is not getting zipped.
Can anyone explain why this is happening? I am new to the shell scripting world.
zip warning: name not matched: myfile.dat
This means the file myfile.dat does not exist.
You will get the same error if the file is a symlink pointing to a non-existent file.
As you say, whatever file is last in $FILES will not be added to the zip, with the above warning. So I think something's wrong with the way you create $FILES. Chances are there is a newline, carriage return, space, tab, or other invisible character at the end of the last filename, resulting in a name that doesn't exist. Try this, for example:
for f in $FILES; do echo :$f:; done
I bet the last line will be incorrect, for example:
:myfile.dat :
...or something like that instead of :myfile.dat: with no characters before the last :
UPDATE
If you say the script started working after running dos2unix on it, that confirms what everybody suspected already: that somehow there was a carriage return at the end of your $FILES list.
od -c shows carriage returns as \r. Try: echo $FILES | od -c
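A minimal sketch of the corresponding fix (assuming $FILES was built from a DOS-formatted list; filelist.txt is a hypothetical source): strip the carriage returns before calling zip.
# remove stray CR characters from the file list
FILES=$(tr -d '\r' < filelist.txt)
zip ./test/step1.zip $FILES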
Another possible cause of the "zip warning: name not matched" error is having any of zip's environment variables set incorrectly.
From the man page:
ENVIRONMENT
    The following environment variables are read and used by zip as described.
    ZIPOPT
        contains default options that will be used when running zip. The contents of this environment variable will get added to the command line just after the zip command.
    ZIP
        [Not on RISC OS and VMS] see ZIPOPT
    Zip$Options
        [RISC OS] see ZIPOPT
    Zip$Exts
        [RISC OS] contains extensions separated by a : that will cause native filenames with one of the specified extensions to be added to the zip file with basename and extension swapped.
    ZIP_OPTS
        [VMS] see ZIPOPT
In my case, I was using zip in a script and had the binary location in an environment variable ZIP so that we could change to a different zip binary easily without making tonnes of changes in the script.
Example:
ZIP=/usr/bin/zip
...
${ZIP} -r folder.zip folder
This is then processed as:
/usr/bin/zip /usr/bin/zip -r folder.zip folder
And generates the errors:
zip warning: name not matched: folder.zip
zip I/O error: Operation not permitted
zip error: Could not create output file (/usr/bin/zip.zip)
The first because it's now trying to add folder.zip to the archive instead of using it as the archive. The second and third because it's trying to use the file /usr/bin/zip.zip as the archive which is (fortunately) not writable by a normal user.
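A quick way to check for this (a sketch): list any zip-related variables in the environment, and use a differently named variable for the binary path instead.
# show zip-related environment variables, if any are set
env | grep -E '^(ZIP|ZIPOPT)='
# a variable name zip does not read avoids the clash
ZIP_BIN=/usr/bin/zip
${ZIP_BIN} -r folder.zip folder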
Note: This is a really old question, but I didn't find this answer anywhere, so I'm posting it to help future searchers (my future self included).
eebbesen hit the nail on the head in his comment for my case (but I cannot vote on a comment).
Another possible reason, missed in the other comments, is a file exceeding the file size limit (4 GB).
I converted my script for the Unix environment using the dos2unix command and executed my script as ./myscript.sh instead of bash myscript.sh.
I just discovered another potential cause for this. If the permissions of the directory/subdirectory don't allow zip to find the file, it will report this error. In fact, if you run chmod -R 444 on the directory and then try to zip it, you will reproduce this error, and also get a "stored 0%" report, like this:
zip warning: name not matched: borrar/enviar
adding: borrar/ (stored 0%)
Hence, try changing the permissions of the file. If you are trying to send files through email, and the email provider (like Gmail) applies silly filters that refuse executables, keep in mind that making permissions very strict before zipping can be the cause of the "name not matched" error you are reporting.
Spaces are not allowed: the command would fail if there is more than one file in $FILES unless you process them in a loop, as sketched below.
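A minimal sketch of the loop approach (assuming bash and a newline-separated $FILES):
# add one file per iteration so embedded spaces stay within one argument
while IFS= read -r f; do
    zip ./test/step1.zip "$f"
done <<< "$FILES"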
I also encountered this issue. In my case, the line separator in my zip shell script was CRLF, which caused the problem. Using LF fixed it.

Handling command line options with multiple arguments for some flags

I'm writing a program where the command line usage should be something like:
mkblueprint FILE FILE FILE -o <output name> -s <string> -r <number> -p pOPT1 pOPT2 pOPT3
I'm currently using CmdLib and I can't figure out a way to handle this; a flag is required for each input (so I can't just have FILEs sitting alone), and there doesn't appear to be a way to pass multiple arguments to a flag, as with -p. These are extremely common in command-line programs, so I figure I'm just misunderstanding the documentation, but it's not mentioned in any command-line library I look at for Haskell.
After some more work with CmdLib I was able to handle the bare FILE input via the Extra tag and then checking that each string is a valid file, which seems to be the standard way to handle it despite the name. -p pOPT1 pOPT2 pOPT3 is apparently not allowed under the POSIX standard, which is why I'm not finding libraries that will do it.
You might consider the GetOpt bindings that come with base. They're not as sexy as some of the more modern alternatives, but they support bare arguments and final options well.

Compressing the core files during core generation

Is there a way to compress core files during core dump generation?
If storage space is limited on the system, is there a way to conserve it by compressing core dumps immediately as they are generated?
Ideally the method would work on older versions of Linux, such as 2.6.x.
The Linux kernel /proc/sys/kernel/core_pattern file will do what you want: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#191
Set the filename to something like |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz and your core files should be saved compressed for you.
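A minimal sketch of setting this (requires root; note that the answer below reports the > redirection being passed literally to gzip on some systems, in which case the helper-script approach is the safer route):
sudo sh -c 'echo "|/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz" > /proc/sys/kernel/core_pattern'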
For embedded Linux systems, the following change works perfectly to generate compressed core files, in 2 steps:
step 1: create a script
touch /bin/gen_compress_core.sh
chmod +x /bin/gen_compress_core.sh
cat > /bin/gen_compress_core.sh
#!/bin/sh
exec /bin/gzip -f - >"/var/core/core-$1.$2.gz"
(press Ctrl+D to end the input)
step 2: update the core pattern file
cat > /proc/sys/kernel/core_pattern
|/bin/gen_compress_core.sh %e %p
(press Ctrl+D to end the input)
As suggested by the other answer, the Linux kernel's /proc/sys/kernel/core_pattern file is a good place to start: http://www.mjmwired.net/kernel/Documentation/sysctl/kernel.txt#141
As the documentation says, you can specify the special character "|", which tells the kernel to pipe the core file to a script. As suggested, you could use |/bin/gzip -1 > /var/crash/core-%t-%p-%u.gz as the name; however, it doesn't seem to work for me. I expect the reason is that on my system the kernel doesn't treat the > character as a redirection; rather, it probably passes it as a parameter to gzip.
In order to avoid this problem, as others suggested, you can create your script in some location. I am using /home/<username>/crashes/core.sh; create it using the following command, replacing <username> with your user. Alternatively, you can obviously change the entire path.
echo -e '#!/bin/bash\nexec /bin/gzip -f - >"/home/<username>/crashes/core-$1-$2-$3-$4-$5.gz"' > ~/crashes/core.sh
This script takes up to 5 input parameters, concatenates them, and appends them to the core path. The full paths must be specified in ~/crashes/core.sh, and the location of the script itself can be changed. Now let's tell the kernel to use our executable, with parameters, when generating a core file:
sudo sysctl -w kernel.core_pattern="|/home/<username>/crashes/core.sh %e %p %h %t"
Again, <username> should be replaced (or the entire path changed to match the location and name of the core.sh script). The next step is to crash some program; let's create an example crashing .cpp file:
int main() {
    int *a = nullptr;
    int b = *a;   // dereferencing a null pointer triggers SIGSEGV
}
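To build and trigger the crash (a sketch; the crashme.cpp file name is an assumption):
g++ -g -o crashme crashme.cpp
ulimit -c unlimited   # make sure core files aren't limited to size 0
./crashme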
After compiling and running, there are two possibilities. Either we will see:
Segmentation fault (core dumped)
Or
Segmentation fault
In case we see the latter, there are a few possible reasons:
• ulimit is not set; ulimit -c shows the current limit for core files (ulimit -c unlimited lifts it)
• apport or your distro's core dump collector is not running; this should be investigated further
• there is an error in the script we wrote; I suggest first checking with a basic dump path to rule out the other causes. The command below should create /tmp/core.dump:
sudo sysctl -w kernel.core_pattern="/tmp/core.dump"
I know there is already an answer to this question; however, it wasn't obvious to me why it wasn't working "out of the box", so I wanted to summarize my findings. I hope it helps someone.
