How to use ccache selectively? - linux

I have to compile multiple versions of an app written in C++, and I'm thinking of using ccache to speed up the process.
ccache howtos have examples which suggest creating symlinks named gcc, g++, etc. and making sure they appear in PATH before the original gcc binaries, so that ccache is used instead.
So far so good, but I'd like to use ccache only when compiling this particular app, not always.
Of course, I could write a shell script that creates these symlinks every time I want to compile the app and deletes them when the app is compiled. But this looks like filesystem abuse to me.
Are there better ways to use ccache selectively, rather than always?
For compiling a single source file, I could just call ccache manually instead of gcc and be done, but I have to deal with a complex app that uses an automated build system for many source files.

To bypass ccache, just:
export CCACHE_DISABLE=1
For more info:
man ccache
...
If you set the environment variable CCACHE_DISABLE then ccache will just call the real
compiler, bypassing the cache completely.
...
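If you only want to bypass ccache for a single invocation rather than the whole shell session, the variable can also be set per command; a minimal sketch:

```shell
# Bypass ccache for this one build only; any ccache symlinks stay in
# place, but ccache hands straight off to the real compiler.
CCACHE_DISABLE=1 make
```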

What OS? Linux? Most packaged versions of ccache already put those symlinks into a directory, for example on my Fedora machine they live in /usr/lib64/ccache.
So you could just do
PATH=/usr/lib64/ccache:${PATH} make
when you want to build with ccache.
Most packages also install a file in /etc/profile.d/ which automatically enables ccache, by adding it to the PATH as above.
If that's the case on your system, just set CCACHE_DISABLE=1 (see man ccache for more info) in your environment to disable ccache - ccache will still be run, but will simply call the real compiler.

I have stumbled across this so many times now. For me the best solution was to do this:
export CCACHE_RECACHE=1
From the ccache manpages:
Corrupt object files
It should be noted that ccache is susceptible to general storage problems. If a bad object file sneaks into
the cache for some reason, it will of course stay bad. Some possible reasons for erroneous object files are
bad hardware (disk drive, disk controller, memory, etc), buggy drivers or file systems, a bad prefix_command
or compiler wrapper. If this happens, the easiest way of fixing it is this:
1. Build so that the bad object file ends up in the build tree.
2. Remove the bad object file from the build tree.
3. Rebuild with CCACHE_RECACHE set.
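The three steps quoted above might look like this in practice (the object file path is hypothetical):

```shell
# 1. build normally so the bad object file ends up in the build tree
make
# 2. remove the suspect object file from the build tree
rm build/widget.o
# 3. rebuild with CCACHE_RECACHE set: ccache recompiles and overwrites
#    the cached result instead of reusing the corrupt one
CCACHE_RECACHE=1 make
```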

The alternative to creating symlinks is to explicitly use ccache gcc as the C compiler and ccache g++ as the C++ compiler. For instance, if your Makefile uses the variables CC and CXX to specify the compilers, you can build with make CC="ccache gcc" CXX="ccache g++" or set it up at configure time (./configure CC="ccache gcc" CXX="ccache g++").

Related

How do you create a hermetic LLVM toolchain that lives outside of /usr?

I'd like our product's builds to use a more recent version of Clang/LLVM than what is available in our distribution's package manager. I built it from source, making sure to include libc++, libc++abi, compiler-rt, clang, and lld. When configuring the project, I used -DCMAKE_INSTALL_PREFIX=/some/folder/under/my/repo.
After building, I ran ninja install. When I try to compile a file using this version of clang, passing --sysroot /some/folder/under/my/repo, it is able to find the libc++ and libc headers, but not system headers. For example, the first file it can't find is features.h, which lives under /usr/include and is not part of LLVM.
I tried symlinking a few of these and was planning to check in the symlinks, but eventually I reasoned with myself "Why bother, I can just -I/usr/include, because the sysroot would take priority anyway."
But even this is not enough. I next encountered sys/cdefs.h. This file is actually under /usr/include/x86_64-linux-gnu/sys/cdefs.h, so even a -I/usr/include isn't sufficient.
I can certainly keep going down this path and copying/symlinking every single dependency from /usr/include over to my new sysroot, but I can't help but think I'm going about this all wrong. That said, asking every engineer in the company to build clang from source and install it to /usr is a non-starter. Is there any way to check in binaries into a repo, have the build pull the compiler, headers, libs, etc straight out of the repo, but still pull non-LLVM system headers from /usr/include?

Add linker flag during conan install

I'm working on a project that uses a number of external libraries, which are included using Conan. The project is primarily written for Unix, but it also needs to compile and run on Windows.
My current problem is that Windows defaults fopen() to O_TEXT, while Unix expects it to be O_BINARY. I have a fix that works for my own code: simply include binmode.obj when linking to change this default to O_BINARY.
The problem is that this does not affect my third-party libraries. Googling didn't turn up much; most suggestions cover the case where you are creating your own package and want flags added, rather than how to add flags when consuming other people's packages.
What I have tried so far:
Make binmode.obj come before libraries, in case the linking order matters. Made no difference.
Added --env 'CL=link binmode.obj' to conan install, but this flag did not end up as part of the compile flags nor link flags.
Any suggestions for what I could try?
EDIT: I was wrong about "CL" having no effect; that was caused by confusing output. But I did observe that CL seems to be applied to both the compiler and the linker, which makes it somewhat challenging to decide which flags to give. Using the "/link" prefix makes it work with the compiler, but not with the linker.
EDIT 2: More confusion... I hadn't realized that the syntax of the CL value is "<compile flags> /link <link flags>". It affected the compile step, but not linking. So this environment variable apparently can't be used to make Conan add a linker flag for autotools-based packages.
Hi Mats L, welcome to our community!
I once had a similar problem, and what I ended up doing was quite hacky but also quite simple:
In my Conan profile, located at ~/.conan/profiles/default (or any other profile, actually), I added an environment variable like this:
CXX=/usr/bin/clang++ -fms-compatibility. This made all the C++ sources compile with this flag (which lets the compiler understand Windows-specific code).
So in your case you can run which c++ to find the location of your compiler
and edit the CXX environment variable in the Conan profile you use. Your final file will probably look like:
[settings]
os=Macos
os_build=Macos
arch=x86_64
arch_build=x86_64
compiler=clang
compiler.version=11
compiler.libcxx=libc++
build_type=Release
[options]
[build_requires]
[env]
CXX=c++ --required_flag
Some additional notes: you might also want this flag set on your CC environment variable.
It's preferable not to change the default profile, but to copy it (let's say to a file named default_edited_copy) and then invoke the other profile with conan ... --profile default_edited_copy
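Putting those notes together, a sketch of the workflow (the flag is the placeholder from the profile above, and the append-to-file trick assumes [env] is the last section of the profile, as it is in the example):

```shell
# copy the default profile instead of editing it in place
cp ~/.conan/profiles/default ~/.conan/profiles/default_edited_copy
# add compiler overrides; appending works because [env] is the
# profile's last section
printf 'CC=cc --required_flag\nCXX=c++ --required_flag\n' \
    >> ~/.conan/profiles/default_edited_copy
# then build with the edited copy
conan install . --profile default_edited_copy
```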

Use private C++ runtime library on linux

In Windows, the dynamic loader always looks for modules in the path of the loaded executable first, making it possible to have private libraries without affecting system libraries.
The dynamic loader on Linux only looks for libraries in a fixed set of paths, in the sense that the search is independent of the binary being run. I needed GCC 5 for its overflow-checked arithmetic functions, but since the C++ ABI changed between 4.9 and 5, some applications became unstable, and recompiling them solved the issue. While waiting for my distro [kubuntu] to upgrade the default compiler, is it possible to have newly compiled applications link against the new runtime while packaged applications still link against the old library, either by static linkage or by something that mimics the Windows behavior?
One way of emulating it would be to create a wrapper script
#!/bin/bash
LD_LIBRARY_PATH="$(dirname "$(which your_file)")" your_file
and, after the linking step, copy the affected library next to the executable. But it is sort of a hack.
You can use rpath.
Let's say your "new ABI" shared libraries are in /usr/local/newapi-libs.
gcc -L/usr/local/newapi-libs \
    -Wl,-rpath,/usr/local/newapi-libs \
    program.cpp -o program -lsomething
The -rpath option of the linker is the runtime counterpart to -L. When a program compiled this way is run, the dynamic loader will first look in /usr/local/newapi-libs before searching the system library paths.
You can emulate the Windows behavior of looking in the executable's directory by specifying -Wl,-rpath,'$ORIGIN' (the loader expands $ORIGIN to the directory containing the executable; note that -Wl,-rpath,. would instead search the current working directory at run time).
[edit] added missing -L parameter and dashes before rpath.
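A sketch of that Windows-like setup, reusing the library path from the answer above (the library name is the same placeholder):

```shell
# Link with an rpath of $ORIGIN so the loader searches the executable's
# own directory first; single-quote it so the shell doesn't expand it.
g++ program.cpp -o program -L/usr/local/newapi-libs -lsomething \
    -Wl,-rpath,'$ORIGIN'
# After linking, place the private library next to the executable.
cp /usr/local/newapi-libs/libsomething.so .
```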

compiling glibc from source with debug symbols

I need to compile glibc from source with debug symbols.
Where do I specify the -g option for this?
How do I later make sample code link against this particular glibc rather than the one installed on my system?
I need to compile glibc from source with debug symbols
You will have a hard time compiling glibc without debug symbols. A default ./configure && make will already have -g on the compile line.
How do i later make a sample code link to this particular glibc rather than the one installed on my system?
This is somewhat tricky, and answered here.
It is probably a matter of configure tricks. First try configure --help, and then either configure --enable-debug, or perhaps configure CC='gcc -g', or even configure CFLAGS='-O2 -g' (glibc refuses to build without optimization, so keep an -O flag next to -g).
For your sample code, perhaps consider LD_LIBRARY_PATH or LD_PRELOAD tricks (assuming you link against the dynamic library).
But be very careful, since glibc is the cornerstone of GNU/Linux systems.
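A sketch of the whole round trip, with a hypothetical install prefix (glibc must be configured from a separate build directory, and the dynamic loader path shown is the x86-64 one):

```shell
# build glibc out of tree, with debug symbols, into a private prefix
mkdir build && cd build
../glibc/configure --prefix=/opt/debug-glibc CFLAGS='-O2 -g'
make && make install
# run a test program against the private glibc instead of the system
# one by invoking the new dynamic loader directly
/opt/debug-glibc/lib/ld-linux-x86-64.so.2 \
    --library-path /opt/debug-glibc/lib ./a.out
```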

why, after setting LD_LIBRARY_PATH and ld.so.cache properly, are there still library-finding problems?

I have a certain shared object library in a special directory, for which I:
make sure the special directory is in $LD_LIBRARY_PATH
make sure this directory has read and execute permissions for all
make sure the appropriate library directory is in ld.so.conf and that root has run ldconfig
(verified by checking for the library using ldconfig -p as a normal user)
make sure it has no soname problems (i.e. create a few symlinks if necessary)
Now, say I compile a program that needs that special library (a program packaged in the typical open-source manner: ./configure && make, etc.), and it says -lspeciallibrary cannot be found, an error which a failure of any of the above checks would probably also produce.
A workaround I have used is to symlink the library into /usr/local/lib64, and suddenly the library is found. Also, when compiling a relatively simple package, I manually add -L/path/to/spec/lib and that also works. But I regard those two methods as hacks, so I was looking for clues as to why my list of checks isn't good enough to find a library.
(I particularly find the $LD_LIBRARY_PATH of shallow use. In fact I can exclude certain libraries from it, and they will still be found in a compilation process).
$LD_LIBRARY_PATH and ldconfig are only used to locate libraries when running programs that need them, i.e. they are used by the loader, not the compiler. Your program depends on libspeciallibrary.so. When running your program, $LD_LIBRARY_PATH and ldconfig are consulted to find libspeciallibrary.so.
These methods are not used by your compiler to find libraries. For your compiler, the -L option is the right way to go. Since your package uses the autotools, you should set the $LDFLAGS environment variable:
LDFLAGS=-L/path/to/lib ./configure && make
This is also documented in the configure help:
./configure --help
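If the program should also find the library at run time without relying on LD_LIBRARY_PATH or ldconfig, the run-time search path can be baked in at the same step; a sketch using the placeholder path from above:

```shell
# -L satisfies the link step; -Wl,-rpath tells the dynamic loader
# where to look when the resulting program is run
LDFLAGS='-L/path/to/lib -Wl,-rpath,/path/to/lib' ./configure && make
```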
