I'm trying to verify which optimisation level (-O?) my Linux kernel was built with. How can I do that?
The only thing I can find is CONFIG_CC_OPTIMIZE_FOR_SIZE=y in the kernel config file. Does it imply -Os? Does it override anything (with multiple optimisation flags on one gcc line, the last -O wins)? I have found some parts of the kernel built with -O2, but far too few lines to account for the whole kernel.
Where is such optimisation centrally set?
Note: I'm using CentOS 5.5.
Run with make V=1 and you can see the command lines in all their glory.
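For example, one rough way to see which -O levels actually appear in the compile lines (the log file name is just an example):
make V=1 2>&1 | tee build.log
grep -o -- '-O[0-9s]' build.log | sort | uniq -c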
If your kernel config contains CONFIG_CC_OPTIMIZE_FOR_SIZE=y, you may assume it was compiled with -Os. See the kernel Makefile, e.g. at http://lxr.linux.no/linux+v3.12/Makefile#L573, for the place where this gets set; it also shows that -O2 is used if CONFIG_CC_OPTIMIZE_FOR_SIZE is not set.
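If you have the kernel source tree at hand, you can check that logic directly; something along these lines, run from the top of the tree, should show it (the exact Makefile text varies by kernel version):
grep -n -A3 CONFIG_CC_OPTIMIZE_FOR_SIZE Makefile
# expect something like KBUILD_CFLAGS += -Os in the "y" branch and -O2 in the else branch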
As blueshift already said, building with make V=1 forces make to display the full compiler command lines, including optimization flags.
I am building using the regular cabal build on my local machine, and the binary works fine. But when I copy the binary to another server for tests (same architecture, x86_64, and same glibc and so on, as far as I can tell), I get an illegal instruction when I try to run it.
Is there some flag I should pass to cabal to make it compile a more generic binary, maybe?
Thanks
Unlike with GCC, the GHC compiler has only a handful of options to tune instruction sets, and they're all off by default. The complete list is:
-msse -msse2 -msse3 -msse4 -msse4.2 -mbmi -mbmi2 -mavx -mavx2
-mavx512cd -mavx512er -mavx512f -mavx512pf
but there's no corresponding -mno-sse or similar options to turn them off because, like I say, they're off by default. (Well, actually, on the x86_64 architecture, the -msse and -msse2 flags are technically forced on and can't be disabled.)
So, the problem is probably something else, most likely an incompatible or corrupt library. It might be helpful to run under gdb to get a backtrace and see if you can spot a suspicious library or other obvious cause.
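If it helps, a minimal session looks something like this (the binary name is a placeholder):
gdb ./mybinary
(gdb) run
(gdb) bt
(gdb) info sharedlibrary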
I am trying to compile my LED wrapper function program, which includes linux/leds.h, using the kernel-space header files:
gcc -I /usr/src/linux-headers-3.13.0-44-generic/include/ example.c
Compiling it flooded the console with errors from the many header files that leds.h depends on. Can anyone please help me compile this C file, which uses kernel-space header files, in user space?
Thanks in advance. :)
This won't work.
First of all, don't use kernel-mode headers in user-mode programs, except for the (processed?) ones provided for userspace after kernel compilation. Kernel-mode headers depend on the kernel build system to work.
I tried this just out of curiosity, although I already knew why it wouldn't work (tl;dr: I use the Ubuntu-patched 3.13.0-24 kernel):
$ cd /usr/src/linux-headers-3.13.0-24/
$ echo '#include <linux/leds.h>' | gcc -E -x c -o - - -Iinclude
The preprocessor claims that <asm/linkage.h> is missing, and, correct me if I'm wrong, that header is generated by the kernel build system.
If you want, you can solve this by creating a kernel module that uses <linux/leds.h> et al, then export a userspace API through the module (usually done through /proc or /sys) and use that API to implement your usermode code's logic.
Hope this helps!
Thanks KemyLand, you were right that we cannot use kernel-space header files in user space. But your approach didn't quite work for me: first it asked for asm/linkage.h, so I included its path explicitly, but compilation then stopped on another header file, and I did the same again. In the end I got stuck on errors inside the header files themselves, which I had never modified. But finally I got the solution: basically we have to write interfacing functions between kernel space and the hardware device, so I had to build it as a kernel module. I wrote a Makefile containing obj-m := file_name.o and compiled it with make -C /usr/src/linux-headers-3.13.0-44-generic/ -C /usr/include/ M=`pwd` modules. It generated four files: file_name.mod.o, file_name.o, file_name.ko and file_name.mod.c. I then loaded the module as root with insmod file_name.ko; to check the loaded module, type lsmod. I can also load it by typing insmod ./file_name.o, or remove it with rmmod file_name.
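For reference, the usual shape of that out-of-tree build, assuming the module source is file_name.c in the current directory and the same headers path as above, is roughly:
# Makefile contains the single line: obj-m := file_name.o
make -C /usr/src/linux-headers-3.13.0-44-generic/ M=`pwd` modules
sudo insmod file_name.ko
lsmod | grep file_name
sudo rmmod file_name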
I've been trying to compile gcc 4.x from the sources using --with-fpmath=387 but I'm getting this error: "Invalid --with-fpmath=387". I looked in the configs and found that it doesn't support this option (even though docs still mention it as a possible option):
case ${with_fpmath} in
avx)
tm_file="${tm_file} i386/avxmath.h"
;;
sse)
tm_file="${tm_file} i386/ssemath.h"
;;
*)
echo "Invalid --with-fpmath=$with_fpmath" 1>&2
exit 1
;;
esac
Basically, I started this whole thing because I need to supply an executable for an old target platform (in fact, it's an old Celeron but without any SSE2 instructions that are apparently used by libstdc++ by DEFAULT). The executable crashes at the first instruction (movq XMM0,...) coming from copying routines in libstdc++ with an "Illegal instruction" message.
Is there any way to resolve this? I need to be on a fairly recent g++ to be able to port my existing code base.
I was wondering if it's possible to supply these headers/sources from an older build to enable support for regular x87 instructions, so that no SSE instructions are referenced?
UPDATE: Please note I'm talking about compiled libstdc++ having SSE2 instructions in the object code, so the question is not about gcc command line arguments. No matter what I'm supplying to gcc when compiling my code, it will link with libstdc++ that already has built-in SSE2 instructions.
The real answer is not to use ANY --with-fpmath switch when compiling GCC. I got confused by the configure script's case statement, thinking that it only supports sse or avx, while, in fact, the default value (not mentioned in this switch) is "387". So make sure you don't use --with-fpmath when running configure. I recompiled GCC without it and it now works fine.
Thanks.
The argument to tell gcc to produce code for a specific target is -march=CPU, where CPU is the particular CPU you want. For an old Celeron, you probably want -march=pentium2 or -march=pentium3.
To control the fp codegen separately, newer versions of gcc use -mfpmath= -- in your case, you want -mfpmath=387.
All of these and many others are covered in the gcc documentation.
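For example, a rough check on a test program (the file name is just an example, and this only inspects the code you compiled, not the shared libstdc++ it links against):
g++ -march=pentium2 -mfpmath=387 -O2 -o funtime funtime.cpp
objdump -d funtime | grep -c xmm
# a count of 0 suggests no SSE register use in your own object code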
Edit:
In order to use those flags for building libraries (such as libstdc++) that you'll later link in to programs, you need to configure the build for the library to use the appropriate flags. libstdc++ gets built as part of the g++ build, so you'll need to do a custom build -- you can use configure CXXFLAGS=-mfpmath=387 to set extra flags to use while building things.
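A sketch of that kind of build, following the suggestion above (source path, prefix and extra flags are assumptions, not a verified recipe):
mkdir gcc-build && cd gcc-build
../gcc-4.x-src/configure --prefix=$HOME/gcc-387 CXXFLAGS="-march=pentium2 -mfpmath=387"
make && make install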
For study purposes I'd like to test some buffer overflow exploits on an old 1.3.x version of apache webserver.
Anyway, I have stack protection on, so it doesn't work, or at least I think that's the reason.
In order to disable protections I have to compile with these flags:
-fno-stack-protector -z execstack
but I don't know how to add them to the Apache compilation process... I've never done something like this!
Can you help me?
Try:
CFLAGS="-fno-stack-protector" LDFLAGS="-z execstack" ./configure [...]
CFLAGS is for the compiler; execstack is a linker option, so it should go in LDFLAGS. Or, if supported, you can get the compiler to pass the linker options with -Wl, so:
CFLAGS="-fno-stack-protector -Wl,-z,execstack" ./configure [...]
See the INSTALL file in the Apache source archive for more details.
It's useful to inspect or compare the generated top-level Makefile; you should see your parameters in either or both of EXTRA_CFLAGS and EXTRA_LDFLAGS.
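For example, from the top of the Apache source tree (the install prefix is just an example):
CFLAGS="-fno-stack-protector" LDFLAGS="-z execstack" ./configure --prefix=/usr/local/apache
grep -E 'EXTRA_(CFLAGS|LDFLAGS)' Makefile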
Given the task you have, if you're running a Linux distribution which has a periodic pre-linking and ASLR task, you should check that you install Apache to a path that does not get processed; otherwise your testing might get complicated when your Apache binary is "fixed" one night...
Check if prelink is installed with
dpkg -l prelink # Ubuntu/Debian derived
rpm -qv prelink # CentOS/Red Hat derived
and check the configuration (usually) in /etc/prelink.conf and one of /etc/defaults/prelink or /etc/sysconfig/prelink.
On Ubuntu (but not on CentOS/RH), directories under /usr/local/ (bin, sbin, lib) are included for processing. If you install Apache to the default /usr/local/apache, it should be untouched; or, if you want to be thorough, you can add a directory blacklist (-b) line to /etc/prelink.conf.
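For example, as root (the prefix matches the default mentioned above):
echo '-b /usr/local/apache' >> /etc/prelink.conf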
Is there some way to use ld.so.preload and cover both 32bit and 64bit binaries?
If I list both the 32-bit and 64-bit versions of the fault handler in ld.so.preload, then the loader always complains that one of them fails to preload for whatever command I run. Not exactly earth-shaking, since the error is more of a warning, but I could certainly do without the printout.
Instead of specifying an absolute path, I tried specifying simply "segv_handler.so" in the hope that the loader would choose the lib in the arch-appropriate path (a 32-bit version is in /lib and a 64-bit version is in /lib64). No such luck, apparently.
Is there a way to setup ld.so.preload to be architecturally aware? Or if not is there some way to turn off the error message?
This works:
Put the 32-bit library under /path/lib and the 64-bit one under /path/lib64; they should have the same name.
Then put the following line in /etc/ld.so.preload:
/path/$LIB/libname.so
$LIB automatically gets the value "lib" (for 32-bit) or "lib64" (for 64-bit).
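Concretely, reusing the segv_handler.so name from the question (/path and the build directories are placeholders):
mkdir -p /path/lib /path/lib64
cp build32/segv_handler.so /path/lib/
cp build64/segv_handler.so /path/lib64/
echo '/path/$LIB/segv_handler.so' > /etc/ld.so.preload
# the single quotes stop the shell expanding $LIB; the loader expands it at preload time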
There's no reason to try to use ld.so.preload like this. By default ld is smart enough to know that if you're running a 64bit app to only lookup 64bit libs, and same with 32bit.
Case in point, if you have
/lib64/libawesome.so
/lib/libawesome.so
And you try
gcc -lawesome -o funtime funtime.c
It'll choose whatever default gcc wants to build; ld will skip libraries of the wrong bit size for that build.
gcc -m64 -lawesome -o funtime funtime.c will pick the 64bit one
gcc -m32 -lawesome -o funtime funtime.c will pick the 32bit one.
This presumes that /etc/ld.so.conf lists /lib and /lib64 by default.
Sadly, I think the answer might be "Don't do that."
From glibc, elf/rtld.c:
There usually is no ld.so.preload file, it should only be used for emergencies and testing. So the open call etc should usually fail. Using access() on a non-existing file is faster than using open(). So we do this first. If it succeeds we do almost twice the work but this does not matter, since it is not for production use.
You can provide 32-bit and 64-bit libraries using special expansion keys in the path name.
For instance, you can use /lib/$PLATFORM/mylib.so and create /lib/i386/mylib.so and /lib/x86_64/mylib.so. The dynamic loader will choose the correct one for your executable.
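A minimal sketch of that layout (library name as in the example above; the exact values $PLATFORM expands to depend on your CPU and glibc):
mkdir -p /lib/i386 /lib/x86_64
cp build32/mylib.so /lib/i386/
cp build64/mylib.so /lib/x86_64/
echo '/lib/$PLATFORM/mylib.so' >> /etc/ld.so.preload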