How to compile TightVNC with debug information enabled? - linux

I downloaded the TightVNC source code from its website. Now I am trying to use gdb on its executable. The debugger successfully adds breakpoints on functions, but when I try to step through a function it says:
Single stepping until exit from function func, which has no line number information
I think this is because the compilation wasn't done with the correct flags. I am trying to search the configuration files to understand how to enable it, but haven't been able to so far. I am not acquainted with Imakefiles etc. Maybe someone who has done this previously can help?
Using GNU GCC and GDB on an Ubuntu machine.

You should compile with the -g flag.
If you are trying to learn the code, I would recommend "-g -O0". That turns the optimizer off; GCC optimization can make it confusing to step through code.
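For TightVNC's Imake-based Unix source, the usual knob is the CDEBUGFLAGS Imake variable; a minimal sketch, assuming the standard xmkmf/make World workflow (whether your tree honors the command-line override is an assumption worth verifying):
# Regenerate the Makefiles, then rebuild with debug info and no optimization
xmkmf -a
make World CDEBUGFLAGS="-g -O0"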

Related

"Illegal instruction" when running ARM code targeting my CPU

I'm compiling a rather large project for ARM. I'm using an AT91SAM9G25-EK as a devboard running a Debian ARM image. All libraries and executables in the image seem to be compiled for the armv4t instruction set.
My CPU is an ARM926EJ-S, which should run armv5tej code.
I'm using GCC to cross compile for my board. My CXX flags look like the following:
set(CMAKE_CXX_FLAGS "--signed-char --sysroot=${SYSROOT} -mcpu=arm926ej-s -mtune=arm926ej-s -mfloat-abi=softfp" CACHE STRING "" FORCE)
If I try to run this on my board, I get an Illegal Instruction signal (SIGILL) during initialization of one of my dependencies (using armv4t).
If I enable thumb mode (-mthumb -mthumb-interwork) it works, but uses Thumb for all the code, which in my case runs slower (I'm doing some serious number crunching).
In this case, if I specify one function to be compiled for ARM mode (using __attribute__((target("arm")))) it will run fine until that function is called, then exit with SIGILL.
I'm lost. Is it a problem that I'm linking against libraries built for armv4t? Am I misunderstanding how ARM modes work? Is it something in the Linux kernel?
What softfp means is to use the soft-float calling convention between functions, but still use the hardware FPU within them. Assuming your cross-compiler is configured with a default -mfpu option other than "none" (run arm-whatever-gcc -v and look for --with-fpu= to check), then you've got a problem, because as far as I can see from the Atmel datasheet, the SAM9G25 doesn't have an FPU.
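For instance, the default can be checked like this (the toolchain prefix here is a placeholder for whatever cross-compiler you actually use):
# Print the compiler's configure line and pick out any baked-in FPU default
arm-none-linux-gnueabi-gcc -v 2>&1 | grep -o -- '--with-fpu=[^ ]*'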
My first instinct would be to slap GDB on there, catch the signal and disassemble the offending instruction to make sure, but the fact that Thumb code works OK is already a giveaway (Thumb before ARMv6T2 doesn't include any coprocessor instructions, and thus can't make use of an FPU).
In short, use -mfloat-abi=soft to ensure the ARM code actually uses software floating-point and avoids poking a non-existent FPU. And if the "serious number crunching" involves a lot of floating-point, perhaps consider getting a different MCU...
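Concretely, that would mean changing the flags from the question along these lines (an untested sketch of the same CMake line, with only the float ABI changed):
set(CMAKE_CXX_FLAGS "--signed-char --sysroot=${SYSROOT} -mcpu=arm926ej-s -mtune=arm926ej-s -mfloat-abi=soft" CACHE STRING "" FORCE)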

free(): invalid pointer; ld terminated with signal 6 [Aborted], core dumped

Error
Currently using this to compile my C++ program:
x86_64-w64-mingw32-g++ -std=c++11 -c main.cpp -o main.o -I../include
x86_64-w64-mingw32-g++ main.o -o mainWin.exe -L/usr/lib/x86_64-linux-gnu/ -L/usr/local/lib -lopengl32 -lglfw32 -lGLEW -lX11 -lXxf86vm -lXrandr -lpthread -lXi -DGLEW_STATIC
I am using Mingw to compile my C++ program from Linux (Ubuntu) to a Windows executable. I am relatively new to compiling via command line, but I would like to switch my work environment completely over to Linux.
When I attempt to compile the program, I get the following error:
*** Error in `/usr/bin/x86_64-w64-mingw32-ld`: free(): invalid pointer: [removed]***
ld terminated with signal 6 [Aborted], core dumped
I believe this is because of my build of GLEW. Whenever I make it, it wants to use a mingw32msvc version of MinGW. I think I need it to use x86_64-w64-mingw32-gcc, but I cannot figure out how to make it do this (if that is even possible).
Extra
It's also worth noting that I only get this error with GLEW_STATIC defined at the top of main.cpp. Without it, I get undefined references to GLEW.
It seems that you were using the -lGLEW flag when you're supposed to use -lglew32s (static) or -lglew32! Make sure to #define GLEW_STATIC if you are statically linking, and get the appropriate binaries from their website.
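A sketch of what the corrected invocations might look like (the library names assume the usual MinGW builds of GLEW; adjust them to whatever your binaries are actually called):
# Define GLEW_STATIC at compile time, where it affects the GLEW headers
x86_64-w64-mingw32-g++ -std=c++11 -DGLEW_STATIC -c main.cpp -o main.o -I../include
# Link the static MinGW build of GLEW; opengl32 and gdi32 are its usual Windows dependencies
x86_64-w64-mingw32-g++ main.o -o mainWin.exe -L/usr/local/lib -lglew32s -lopengl32 -lgdi32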
If the loader (or any program) is crashing, check whether you are using the most recent version. If not, get hold of the newest version and try again. If that doesn't resolve it, can you find an older version that works and use that? If you can't easily find a version that works, you need to report the bug to the relevant team, either at MinGW or the binutils team at GNU. Is 32-bit compilation an option? If so, try that. You're in a hole; it will probably take some digging to get yourself out.
This problem seems to still occur in 2016, even though the question is from 2014. It is a little surprising that it has not been fixed yet, assuming that the faulty free() being hit in 2016 is the same as the one from 2014. If the loader now in use dates from, say, 2013 to early 2015, there is probably an update available and you should investigate it. If the loader dates from mid-2015 onwards, it is more likely (or at least possible) that this is a different bug that manifests itself similarly.
The advice to "try an upgrade if there is one available; if that doesn't work, see whether you can find a working downgrade" remains valid. It would be worth creating an MCVE (Minimal, Complete, and Verifiable Example) and reporting the bug to the maintenance teams, as nodakai suggested in 2014. The smaller the code and the fewer the libraries you need, the easier it will be for the maintainers to find and fix the problem. Even if it is a cross-compiler running on Linux targeting MinGW, you still need to minimize the code and report the issue.
Note that if you can find a 'known-to-work' version, that will probably be of interest to the maintainers, since it narrows down where they need to look.
I should also note that even if you are linking against the wrong library, the loader still shouldn't crash with a free() error. It can report the problem and stop under control; it should not crash. That alone may be worth reporting.
In many ways, this is just generic advice on what to do when you encounter a bug in software.

Profiling a preloaded shared library with LD_PROFILE

I'm currently trying to profile a preloaded shared library by using the LD_PROFILE environment variable.
I compile the library with "-g" flag and export LD_PROFILE_OUTPUT as well as LD_PROFILE before running an application (ncat in my case) with the preloaded library. So, more precisely what I do is the following:
Compile shared library libexample.so with "-g" flag.
export LD_PROFILE_OUTPUT=`pwd`
export LD_PROFILE=libexample.so
run LD_PRELOAD=`pwd`/libexample.so ncat ...
The preloading itself does work and my library is used, but no file libexample.so.profile gets created. If I use export LD_PROFILE=libc.so.6 instead, there is a file libc.so.6.profile as expected.
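For reference, when a profile file does get created, it can be read with glibc's sprof tool (assuming it is installed):
sprof libc.so.6 libc.so.6.profile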
Is this a problem of combining LD_PRELOAD and LD_PROFILE or is there anything I might have done wrong?
I'm using glibc v2.12 on CentOS 6.4 if that is of any relevance.
Thanks a lot!
Sorry, I don't know why LD_PROFILE does not work with LD_PRELOAD.
However, for profiling binaries compiled with -g I really like the tool valgrind together with the graphical tool kcachegrind.
valgrind --tool=callgrind /path/to/some/binary with options
will create a file called something like callgrind.out.1234 where 1234 was the pid of the program when run. That file can be analyzed with:
kcachegrind callgrind.out.1234
In kcachegrind you will easily see in which functions most CPU time is spent, and the callee map shows this in a nice graphical way. The call graph may help you understand how the program works. You will even be able to look at the source code to see how much CPU time is spent on each line.
I hope you will find valgrind useful even though this was not the answer to your LD_PROFILE question. The drawback of valgrind is that it slows the program down considerably, whether it is used for profiling or for memory checking.

Extracting debugging information from core files

I've been tasked with writing a script to clean up old core files on production Linux servers. While the script is not difficult to write, I'd like to save a basic stack backtrace to a log file before removing the core files.
Since these are production servers and we do not have GDB or any development tools installed, I'm looking for a quick and dirty program that will give the analog of gdb's backtrace command for a multithreaded application.
Does anyone know of such a tool?
Thanks in advance.
There are a few things like this. Mostly they are incomplete relative to gdb; for example, it is uncommon for backtracers to print information about function arguments or locals, while gdb can. Also, gdb can often unwind in cases where other unwinders choke.
Anyway, one I know of is elfutils: https://fedorahosted.org/elfutils/. It has an unwinder in development (I am not sure whether it has landed in a release yet; check git).
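If your elfutils build ships the eu-stack tool, something along these lines can print per-thread backtraces from a core file without gdb (the paths here are placeholders):
# Unwind every thread recorded in the core file, using the matching executable for symbols
eu-stack --core=core.1234 -e /path/to/crashed/binary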
There's also libbacktrace. It is part of gcc and is designed for in-process unwinding. However, it could perhaps be adapted to core files.
There's also libunwind. I hear it is kind of terrible but YMMV.
One thing to note is that many of these require debuginfo to be available.
One last thought -- there has been a lot of work in the "catch a trace" area from the ABRT folks. ABRT uses a kernel hook to catch a core dump while it is being made. Then it does analysis by uploading the core to a server, files bugs, etc. You could maybe reuse a lot of their work. There's some other work in this space as well.
Kind of a brain dump, I hope it helps.

How to reduce compilation cost in GCC and make?

I am trying to build some big libraries, like Boost and OpenCV, from source via make and GCC under Ubuntu 8.10 on my laptop. Unfortunately, compiling these big libraries seems to be a heavy burden for my laptop (an Acer Aspire 5000). Its fan gets louder and louder until, all of a sudden, the laptop shuts itself down without the OS turning off gracefully.
So I wonder: how can I reduce the compilation cost when using make and GCC?
I wouldn't mind if the compilation takes much longer or uses more space, as long as it can finish without my laptop shutting itself down.
Is building the debug version of a library always less costly than building the release version, because there is no optimization?
Generally speaking, is it possible to build and install just some parts of a library instead of the whole thing? Can the rest be built and added in later if it turns out to be needed?
Is it correct that if I restart my laptop, the compilation resumes from around where it was when the laptop shut itself down? For example, I noticed this seems true for OpenCV: the progress percentage shown during its compilation does not restart from 0%. But I am not sure about Boost, since there is no obvious way for me to tell, and its compilation seems to take much longer.
UPDATE:
Thanks, brianegge and Levy Chen! How do I use the wrapper script for GCC and/or g++? Is it like defining an alias for GCC or g++? And how would I call a script to check the sensors and wait until the CPU temperature drops before continuing?
I'd suggest creating a wrapper script for gcc and/or g++
#!/bin/bash
sleep 10
exec gcc "$@"
Save the above as "gccslow" or something, and then:
export CC="gccslow"
Alternatively, you can call the script gcc and put it at the front of your path. If you do that, be sure to include the full path in the script, otherwise, the script will call itself recursively.
A better implementation could call a script to check sensors and wait until the CPU temperature drops before continuing.
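A hypothetical version of such a wrapper, assuming lm-sensors is installed; the temperature threshold, the "Core 0" label, and the parsing are all assumptions you would adapt to your machine:
#!/bin/bash
# Wait until the first reported core temperature drops below 75 degrees C
while true; do
    temp=$(sensors | awk '/^Core 0:/ { gsub(/[^0-9.]/, "", $3); print int($3); exit }')
    [ -z "$temp" ] && break      # could not read a temperature; give up waiting
    [ "$temp" -lt 75 ] && break  # cool enough, go ahead
    sleep 30
done
exec gcc "$@"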
For your latter question: a well-written Makefile will define the dependencies as a directed acyclic graph (DAG) and will try to satisfy them by compiling in an order consistent with that DAG. Thus, once a file has been compiled, the dependency is satisfied and it need not be compiled again.
It can, however, be tricky to write good Makefiles, so sometimes the author resorts to a brute-force approach and recompiles everything from scratch.
For such well-known libraries, I will assume the Makefiles are written properly and that the build will resume from the last operation (with the caveat that make needs to rescan the DAG and recalculate the compilation order, which should be relatively cheap).
Instead of compiling the whole thing, you can also compile each target separately; you have to examine the Makefile to identify them.
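For example, with a plain Makefile-based project you can ask make for a single target, and Boost's own build system can be told to build only selected libraries (the target and library names below are illustrative, and the Boost invocation varies between releases):
# Build only one target of a Makefile-based project
make opencv_core
# Restrict a Boost build to selected libraries (newer Boost releases)
./bootstrap.sh --with-libraries=filesystem,system
./b2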
Tongue-in-cheek: What about putting the laptop into the fridge while compiling?
