Illegal instruction - haskell

I am building using the regular cabal build on my local machine, and the binary works fine. But when I copy the binary to another server for tests (same architecture: x86_64, and same glibc and so on, as far as I can tell), I get an illegal instruction error when I try to run it.
Is there some flag I should pass to cabal to make it compile a more generic binary, maybe?
Thanks

Unlike with GCC, the GHC compiler has only a handful of options to tune instruction sets, and they're all off by default. The complete list is:
-msse -msse2 -msse3 -msse4 -msse4.2 -mbmi -mbmi2 -mavx -mavx2
-mavx512cd -mavx512er -mavx512f -mavx512pf
but there are no corresponding -mno-sse or similar options to turn them off because, as I say, they're off by default. (Well, actually, on the x86_64 architecture, the -msse and -msse2 flags are technically forced on and can't be disabled.)
So, the problem is probably something else, most likely an incompatible or corrupt library. It might be helpful to run under gdb to get a backtrace and see if you can spot a suspicious library or other obvious cause.
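For example (a sketch; "mybinary" is a hypothetical name), on the failing server you could first compare CPU features and library versions, then get a backtrace:
$ grep -oE 'sse[0-9_]*|avx[0-9]*' /proc/cpuinfo | sort -u
$ ldd ./mybinary
$ gdb ./mybinary
(gdb) run
(gdb) backtrace
After the SIGILL fires, the top frames of the backtrace usually point at the library containing the faulting instruction.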

Related

How to build a gcc compiler on Linux that builds both 32-bit and 64-bit code

I followed the directions in the following URL to build a gcc compiler for Linux:
https://solarianprogrammer.com/2016/10/07/building-gcc-ubuntu-linux/
The resulting compiler builds 64-bit code with no problems.
However, when I try to build 32-bit code (by specifying the -m32 compiler option), I get errors.
Here are the errors that I get:
cannot find -lstdc++
cannot find -lgcc_s
skipping incompatible libgcc.a when searching for -lgcc
cannot find -lgcc
Obviously, when I built the compiler, I did something wrong. Can anyone tell me what I did wrong and how I can rebuild the compiler so it builds both 32-bit and 64-bit code?
You at least need to configure with --with-multilib-list=m32,m64 on the configure command line.[1] You definitely need to not configure with --disable-multilib. You may also need to build and install additional versions of other libraries.
In general, searching the documentation for 'multilib' will show you all the places where it talks about building or using gcc with multiple target ABIs.
[1] This is the default on at least some versions of gcc. You could also add mx32 if you want to experiment with that.
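A configure invocation along those lines (a sketch; paths and version are hypothetical) might look like:
$ mkdir build && cd build
$ ../gcc-6.1.0/configure --prefix=/opt/gcc-6.1.0 \
    --enable-languages=c,c++ --with-multilib-list=m32,m64
$ make -j4 && make install
On Debian/Ubuntu you'll also need the 32-bit development packages (e.g. libc6-dev-i386) so the 32-bit glibc headers and startup files are present.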

RISC-V RV32I soft float lib calls MUL and MULHU instructions in __muldf3

I'm using the current riscv-tools to build a firmware image for the PicoRV32 core. The firmware requires floating point, so I'm using -msoft-float. These are the compiler/linker options I am using:
-Os -m32 -march=RV32I -msoft-float -ffreestanding -nostdlib -lgcc
In this configuration, __muldf3 is provided by (according to the linker's -Map output):
/opt/riscv/lib/gcc/riscv64-unknown-elf/4.9.2/soft-float/32/libgcc.a(dp-bit.o)
But this code is not compatible with the RV32I ISA: It uses the MUL and MULHU instructions!
How do I get soft-float for the plain RV32I ISA? Do I need to compile my own version of libgcc.a? Are there instructions somewhere on how to do this?
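For reference, the offending instructions can be confirmed by disassembling that archive member (a sketch using the path above):
$ riscv64-unknown-elf-objdump -d \
    /opt/riscv/lib/gcc/riscv64-unknown-elf/4.9.2/soft-float/32/libgcc.a | grep -wE 'mul|mulhu'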
As you've noticed, the "-march=" flag only affects the current translation unit, not the libraries which get generated at toolchain build time.
Although there exist "disable-atomics"/"disable-float" configuration flags for building the toolchain, there is no multilib option for multiply/divide because they don't affect the ABI; the assumption is that the execution environment can emulate those instructions.
To the last point, the latest Privileged ISA v1.7 is designed such that you can run mul/div code and then trap into the machine-mode to emulate the mul/div instructions (you can even trap into M-mode while running in M-mode!). You'd have to provide your own mul trap handler in M-mode (probably located in your own crt0 file and linked in at compile time).
I instead recommend you try the --with-arch flag: a recent patch adds support for it, making it possible to build a gcc that by default won't generate multiply/divide instructions, which in turn keeps libgcc free of them. You can try adding --with-arch=RV32I to the gcc configure line (to do so, you'll have to modify Makefile.in in the riscv-gnu-toolchain).
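A rough outline of that build (the repository layout and the exact Makefile.in edit depend on the toolchain revision, so treat this as a sketch):
$ git clone https://github.com/riscv/riscv-gnu-toolchain
$ cd riscv-gnu-toolchain
$ # edit Makefile.in so gcc's configure step also receives --with-arch=RV32I
$ ./configure --prefix=/opt/riscv-rv32i
$ make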

gcc 4.x not supporting x87 FPU math?

I've been trying to compile gcc 4.x from the sources using --with-fpmath=387 but I'm getting this error: "Invalid --with-fpmath=387". I looked in the configs and found that it doesn't support this option (even though docs still mention it as a possible option):
case ${with_fpmath} in
avx)
    tm_file="${tm_file} i386/avxmath.h"
    ;;
sse)
    tm_file="${tm_file} i386/ssemath.h"
    ;;
*)
    echo "Invalid --with-fpmath=$with_fpmath" 1>&2
    exit 1
Basically, I started this whole thing because I need to supply an executable for an old target platform (in fact, an old Celeron without any SSE2 support, while SSE2 instructions are apparently used by libstdc++ by DEFAULT). The executable crashes at the first SSE2 instruction (movq XMM0, ...) coming from the copying routines in libstdc++, with an "Illegal instruction" message.
Is there any way to resolve this? I need to be on a fairly recent g++ to be able to port my existing code base.
I was wondering if it's possible to supply these headers/sources from an older build to enable support for regular x87 instructions, so that no SSE instructions are referenced?
UPDATE: Please note I'm talking about compiled libstdc++ having SSE2 instructions in the object code, so the question is not about gcc command line arguments. No matter what I'm supplying to gcc when compiling my code, it will link with libstdc++ that already has built-in SSE2 instructions.
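(For anyone wanting to confirm this themselves, a sketch; the library path may differ per distribution:)
$ objdump -d /usr/lib/libstdc++.so.6 | grep -m5 '%xmm'
Any hits show SSE register usage baked into the shipped library, independent of your own compile flags.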
The real answer is not to use ANY --with-fpmath switch when compiling GCC. I got confused by the configure script's switch statement into thinking that it only supports sse or avx, while, in fact, the default value (not mentioned in this switch) is 387. So make sure you don't use --with-fpmath when running configure. I recompiled GCC without it and it now works fine.
Thanks.
The argument to tell gcc to produce code for a specific target is -march=CPU where CPU is the particular cpu you want. For an old celeron, you probably want -march=pentium2 or -march=pentium3
To control the fp codegen separately, newer versions of gcc use -mfpmath= -- in your case, you want -mfpmath=387.
All of these and many others are covered in the gcc documentation.
Edit:
In order to use those flags for building libraries (such as libstdc++) that you'll later link into programs, you need to configure the library build to use the appropriate flags. libstdc++ gets built as part of the g++ build, so you'll need to do a custom build; you can use configure CXXFLAGS=-mfpmath=387 to set extra flags to use while building things.
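A sketch of such a custom build (version, prefix, and exact flags hypothetical):
$ ../gcc-4.x.y/configure --prefix=/opt/gcc-387 \
    CFLAGS='-march=pentium2 -mfpmath=387' CXXFLAGS='-march=pentium2 -mfpmath=387'
$ make && make install
Depending on the gcc version, the target libraries may instead honour CFLAGS_FOR_TARGET/CXXFLAGS_FOR_TARGET, so check the build documentation for your release.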

how can I check Linux kernel compiler optimisation level

I'm trying to verify which optimisation level (-O?) my Linux kernel was built with. How can I do that?
The only thing I can find is CONFIG_CC_OPTIMIZE_FOR_SIZE=y in the kernel config file. Does it imply -Os? Does it override anything (with multiple -O options on one gcc line, the last one wins)? I have found some parts of the kernel built with -O2, but too few lines to account for the whole kernel.
Where is such optimisation centrally set?
Note: I'm using CentOS 5.5.
Run with make V=1 and you can see the command lines in all their glory.
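For example (a sketch), to collect every -O flag that actually appears during a build:
$ make V=1 2>&1 | grep -o -- '-O[s0-9]*' | sort | uniq -c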
If your kernel config contains CONFIG_CC_OPTIMIZE_FOR_SIZE=y, you may assume it was compiled using -Os. See the kernel Makefile (e.g. at http://lxr.linux.no/linux+v3.12/Makefile#L573) for the place where this gets set; it also shows that if CONFIG_CC_OPTIMIZE_FOR_SIZE is not set, -O2 is used.
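The relevant logic in that Makefile looks roughly like this (simplified; exact flags vary by kernel version):
ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
KBUILD_CFLAGS	+= -Os
else
KBUILD_CFLAGS	+= -O2
endif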
As blueshift already said, building with make V=1 forces make to display the full compiler output including optimization flags.

Using software floating point on x86 linux

Is it (easily) possible to use software floating point on i386 Linux without incurring the expense of trapping into the kernel on each call? I've tried -msoft-float, but it seems the normal (Ubuntu) C libraries don't have an FP library included:
$ gcc -m32 -msoft-float -lm -o test test.c
/tmp/cc8RXn8F.o: In function `main':
test.c:(.text+0x39): undefined reference to `__muldf3'
collect2: ld returned 1 exit status
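(The original test.c isn't shown; a minimal stand-in such as the following, with one double multiply, reproduces the undefined __muldf3 reference when built with -msoft-float:)
/* test.c, hypothetical reconstruction: any double-precision
   arithmetic compiled with -msoft-float becomes a libcall,
   here a call to __muldf3. */
#include <stdio.h>

int main(void)
{
    volatile double a = 3.0, b = 7.5;
    printf("%f\n", a * b);
    return 0;
}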
It is surprising that gcc doesn't support this natively as the code is clearly available in the source within a directory called soft-fp. It's possible to compile that library manually:
$ svn co svn://gcc.gnu.org/svn/gcc/trunk/libgcc/ libgcc
$ cd libgcc/soft-fp/
$ gcc -c -O2 -msoft-float -m32 -I../config/arm/ -I.. *.c
$ ar -crv libsoft-fp.a *.o
There are a few C files which don't compile due to errors, but the majority do compile. After copying libsoft-fp.a into the directory with our source files, they now compile fine with -msoft-float:
$ gcc -g -m32 -msoft-float test.c -lsoft-fp -L.
A quick inspection using
$ objdump -D --disassembler-options=intel a.out | less
shows that, as expected, no x87 floating point instructions are emitted, and the code also runs considerably slower: by a factor of 8 in my example, which uses lots of division.
Note: I would've preferred to compile the soft-float library with
$ gcc -c -O2 -msoft-float -m32 -I../config/i386/ -I.. *.c
but that results in loads of error messages like
adddf3.c: In function '__adddf3':
adddf3.c:46: error: unknown register name 'st(1)' in 'asm'
Seems like the i386 version is not well maintained, as st(1) refers to one of the x87 registers, which are obviously not available when using -msoft-float.
Strangely (or luckily), the ARM version compiles fine on i386 and seems to work correctly.
Unless you want to bootstrap your entire toolchain by hand, you could start with the uClibc toolchain (the i386 version, I imagine); soft float is (AFAIK) not directly supported for "native" compilation on Debian and derivatives, but it can be used via the "embedded" approach of the uClibc toolchain.
GCC does not support this without some extra libraries. From the 386 documentation:
-msoft-float
Generate output containing library calls for floating point. Warning: the requisite libraries are not part of GCC. Normally the facilities of the machine's usual C compiler are used, but this can't be done directly in cross-compilation. You must make your own arrangements to provide suitable library functions for cross-compilation.
On machines where a function returns floating point results in the 80387 register stack, some floating point opcodes may be emitted even if -msoft-float is used.
Also, you cannot set -mfpmath=unit to "none"; it has to be sse, 387, or both.
However, according to this GNU wiki page, there are fp-soft and ieee. There is also SoftFloat.
(For ARM there is -mfloat-abi=softfp, but it does not seem like something similar is available for 386 SX).
It does not seem like tcc supports software floating point numbers either.
Good luck finding a library that works for you.
G'day,
Unless you're targeting a platform that doesn't have built-in FP support, I can't think of a reason why you'd want to emulate FP support.
Doesn't your x386 platform have external FPU support? Pity it's not an x486 with the FPU built in!
In my experience, any soft emulation is bound to be much slower than its hardware equivalent.
That's why I finished up writing a package in Ada to target the onboard 68k FPU instead of using the soft emulation provided by the compiler manufacturer at the time. They finished up bundling it in their compiler, as a matter of fact.
Edit: Just seen your comment below. Hmmm, if you don't need a full suite of FP support, is it possible to roll your own for the few math functions you do need? That's how the Ada package I mentioned got started.
HTH
cheers,
