I've tried -fvia-C and the -pgm* flags, but none of them manage to create an executable, instead spewing out lots of errors like Warning: retaining unknown function ``L4' in output from C compiler.
GHC can't be used as a cross-compiler out of the box. The build system has some support for cross-compilation which we're currently working on improving. For more information, see CrossCompilation on the GHC wiki. I suggest taking further discussion to the glasgow-haskell-users or cvs-ghc mailing lists.
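For what it's worth, the cross-compilation support in the build system follows the usual autoconf convention of naming the target platform at configure time. A hypothetical sketch (the exact triplet and the level of support depend heavily on your GHC version, so check the wiki page first):

./configure --target=arm-linux-gnueabi    # hypothetical target triplet
make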
Related
Can I compile files (e.g. C or C++ source code) for my Android device using the arm-linux-gnueabi-* toolchain?
My question might seem a bit silly, but will I get the same result as compiling with the arm-linux-androideabi-* toolchain?
Compilation can mean more than just converting source code to a binary. A compiler like GCC also provides certain support libraries, in this case libgcc, to handle what the hardware can't handle itself. When a compiler grows into a toolchain, it also provides the runtime libraries standardised by the programming language, matching the ones provided on the target system. In arm-linux-gnueabi-'s case that means (g)libc, and for arm-linux-androideabi- it's Bionic.
You can produce compatible object files that can be consumed by different compilers; that's what ELF is for.
You can produce static executables, which can be mighty in size, and they should work on any matching hardware/kernel combination, because in that case the toolchain only has to target the kernel interface.
But if you produce dynamic executables, they can only run on systems that provide their dependencies. That is why a simple "hello world" application built non-statically by arm-linux-gnueabi- won't work on an Android system: Android ships Bionic, not (g)libc.
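A minimal sketch of the difference, assuming the arm-linux-gnueabi- toolchain is installed and hello.c is a trivial hello-world source:

arm-linux-gnueabi-gcc -static hello.c -o hello_static    # self-contained; only needs a matching kernel and CPU
arm-linux-gnueabi-gcc hello.c -o hello_dynamic           # expects glibc and its dynamic linker on the target
readelf -d hello_dynamic | grep NEEDED                   # lists the glibc dependencies Bionic won't satisfy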
Here is an interesting question that, if answered positively, would make cross-compiling a whole lot easier.
Since GCC is written in C++, would it be possible to recompile the Linux GCC compiler with MinGW g++ or the Visual Studio C++ compiler on Windows, so that the resulting Windows executable could compile C code into Linux programs?
If so, what would be needed to do that?
So to simplify, here is what I want to do.
mingw32-g++ gcc.cpp -o gcc.exe
That exact command will probably not work; if it were that easy, someone would likely have done it before. What I'm asking is whether the concept is even possible.
Edit: thanks, and expanding the question to nvcc
fvu was able to answer the question for the gcc compiler (please use the answer button next time), so if you had the same question you can thank him (or her).
As an extension to the question, would it be possible to edit or recompile nvcc, or the tools it uses, so that nvcc.exe can create a Linux program from CUDA C code? I read that the Windows variant of nvcc can only use the Visual Studio cl.exe, not MinGW or Cygwin.
Is it possible to create Linux programs with cl.exe? And if so, could that be used to make nvcc.exe generate Linux programs?
Read the chapter on cross-compiling in the GCC manual; GCC's architecture makes it quite easy to set up a toolchain where the target differs from the development machine.
I never went the exact route you describe, but I have built toolchains under Windows (using Cygwin, by the way) that target ARM9 embedded Linux machines, and it works like a charm. Look here for a gentle introduction; there is also very useful info here.
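For concreteness, GCC's cross-compilation support boils down to the configure-time build/host/target triplets; a so-called Canadian cross names all three. A hypothetical sketch of configuring a compiler that is built on Linux, runs on Windows, and emits Linux binaries (triplets and prefix illustrative; in practice you would first need target binutils plus the target's headers and libc):

../gcc/configure \
    --build=x86_64-pc-linux-gnu \
    --host=x86_64-w64-mingw32 \
    --target=x86_64-pc-linux-gnu \
    --prefix=/opt/win-hosted-linux-gcc
make && make install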
I am not going to comment on what can be done with respect to nvcc; CUDA is somewhere on my (long) list of stuff to tinker with...
Now, can cl generate Linux binaries? The answer is "sort of": as long as the target processor comes from a family that cl supports, the object files it generates should probably not contain anything that would prevent their use on Linux, since they'll just contain machine code. That's the theory. However:
- as Linux uses a different executable format, you would need a Windows-hosted linker that understands Windows-style object files (afaik, COFF) and links them together into a Linux-style (ELF) executable. I've never heard of such a beast, although in theory it could exist;
- the startup code (a tiny program that wraps around your main function) will also be different and would need to be written;
- and there's more, e.g. library-related issues.
So, the practical answer is no, although it might be a nice summer project for a bored student :)
My project currently compiles under gcc. It uses Boost and ZeroMQ as static .a libraries, plus some .so libraries like SDL. I want to go clang all the way, but not right now. I wonder: is it possible to compile code with clang that uses .a and .so libraries that were compiled under gcc?
Yes, you can usually use clang with GCC-compiled libraries (and vice versa, use gcc with Clang-compiled libraries), because what is relevant is in fact not compilation but linking. You might still be unlucky and get unpleasant surprises.
You could in principle have some dependencies on the version of libstdc++ used to link the relevant libraries (if they are coded in C++). Actually, that usually does not matter much.
In C++, name mangling might in theory be an issue (there are some corner cases, even incompatibilities between two different versions of g++). Again, in practice it is usually not an issue.
So usually you can mix Clang (even different but close versions of it) with GCC, but you may get unpleasant surprises. All that should be expected from any C++ compiler (be it Clang or GCC) is the ability to compile and link an entire piece of software (and all its libraries) with the same compiler and version (which includes the same C++ standard library implementation). This is why upgrading a compiler in a distribution is a lot of work: the distribution makers have to ensure that all the packages still compile (and they do get surprises!).
Beware that the version of libstdc++ does matter. Both the Clang and GCC communities work hard to keep its ABI compatible across compiler upgrades, but there are subtle corner cases. Read the documentation of your particular C++ standard library implementation. These corner cases can explain mysterious crashes when using a perfectly good C++ library binary (compiled with, say, GCC 5) from code compiled with GCC 8: the bug is not in the library; the ABI simply evolved incompatibly.
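A minimal sketch of the usual happy path on Linux, where both compilers default to libstdc++ (file names mylib.cpp and main.cpp are illustrative):

g++ -c -fPIC mylib.cpp -o mylib.o      # library object built by GCC
ar rcs libmylib.a mylib.o              # packed into a static archive
clang++ main.cpp -L. -lmylib -o app    # linked by Clang against the same libstdc++ ABI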
At least for the Crypto++ library this does not work (verified :-( ). So for C++ code it is less likely to work, while pure C code would probably link OK.
EDIT: The problem started appearing with Mac OS X 10.9 Mavericks and Xcode 5, which switched clang's default C++ library from libstdc++ to libc++. It did not exist on Mac OS X 10.8 and earlier.
The solution appears to be: if you need to compile C++ code with clang, and link it to a gcc-compiled library, use "clang++ -stdlib=libstdc++". The linking is successful, and the resulting binary runs correctly.
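A minimal sketch of that workaround (file and library names illustrative; -lcryptopp stands in for whatever gcc-built library you need):

clang++ -stdlib=libstdc++ -c main.cpp -o main.o      # compile against GCC's C++ runtime
clang++ -stdlib=libstdc++ main.o -lcryptopp -o app   # link against the gcc-built library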
CAVEAT: It does not seem to work the other way around: even though you can build a library with "clang++ -stdlib=libstdc++" and link gcc-compiled code against it, that code will crash with SEGV. So far the only way I have found to link with a clang-compiled library is to compile your own code with clang, not gcc.
EDIT2:
GCC 12 seems to include a -stdlib= flag: compiling with g++ -stdlib=libc++ creates Clang++-compatible object files. Very nice.
I do have an additional data point to contribute on the topic of "unpleasant surprises" when mixing code from different versions of different compilers. In my test, I link Victor Shoup's C++-based NTL number-theory library with a small piece of driver code that just prints out a large factorial computed by the NTL code, a number whose decimal representation can span multiple lines when sufficiently large.
I have built and installed SageMath (and its version of NTL) on my system running OS X 10.11.6, and I also have a current installation of MacPorts. In /usr/bin, gcc --version reports
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin15.6.0
My MacPorts gcc gives
gcc (MacPorts gcc9 9.1.0_2) 9.1.0
Now, the SageMath build system requires that MacPorts be moved out of the way, so I assume SageMath builds NTL using Apple's development toolset. The SageMath build log is full of invocations of gcc. SageMath actually builds gcc from source if the system on which the makefile is run has too old a version of Apple's developer tools.
My driver code computes big factorials using methods of the NTL class ZZ; I initially tested this by linking against an NTL static library I had built myself, then changed it to link against the SageMath version, because I find it pleasing not to duplicate libraries. Now I understand a bit more about the pitfalls that can arise in this process.
The old makefile invoked g++ to build the executable, but this failed at the linking phase with the message:
Undefined symbols for architecture x86_64:
"NTL::operator<<(std::basic_ostream<char, std::char_traits<char> >&, NTL::ZZ const&)",
referenced from:
prn_factorial(int, NTL::ZZ&) in print.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
I had to think about this and run experiments for about 15 minutes before deciding, on my own, to change the makefile to invoke clang++, which in my current path invokes the MacPorts version:
clang version 7.0.1 (tags/RELEASE_701/final)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /opt/local/libexec/llvm-7.0/bin
This time the makefile successfully linked and built my executable. I conclude that this represents one of those edge cases with "unpleasant surprises". Probably I should conclude that working with the details of C++ is not for me; big software systems like SageMath are developed precisely so that hobbyists don't have to muck around with details like these.
Is there any way to produce a stand-alone Haskell executable that will run on different Linux machines, assuming the architecture is similar?
Sorry, I should have been clearer: the other machines might not have GHC installed on them. Something a bit like PyInstaller for Python is what I was looking for.
You can pass the flags -static -optl-pthread -optl-static to avoid dynamically linked dependencies when compiling a Haskell project. This should let you run the compiled executable on Linux machines that do not have exactly the same library versions.
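A minimal sketch, assuming a single-file project Main.hs (names illustrative):

ghc -static -optl-pthread -optl-static Main.hs -o myapp
file myapp    # should report "statically linked"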
Yes, it is possible. Just as with gcc-produced binaries, you can copy them between systems as long as the dynamic libraries and the platform match.
In practice, that's a slightly higher bar than for GCC binaries, because GHC dynamically links more libraries by default (e.g. libgmp, unless you build GHC with integer-simple).
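To see exactly what the target machine would need to provide, a quick check on the built binary (name illustrative):

ldd myapp    # typically lists libgmp, libc, libpthread and friends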
I am trying to compile a piece of software written in Fortran 77. I should point out that I don't know much at all about Fortran, and would really rather not start modifying the code for this software, particularly as I'm not sure what its licensing is and I don't know whether I would be able to redistribute a modified version.
The code compiles fine on OS X and Windows using the g77 compiler that is (fairly easily) available for those systems. However, I cannot get it to work on my Ubuntu distribution: I can't seem to get hold of g77 for Ubuntu anymore, and if I try to install an old version of it, it seems to muck up my entire GCC installation. I have tried compiling the code with both gfortran and g95, but neither works, because:
The code uses real variables as loop indices (yes, I know, bad idea). g95 supports this with the -freal-loops option, but gfortran doesn't.
The code uses real variables to index into arrays, which gfortran supports (with a warning), but g95 doesn't.
Can anyone suggest a way to compile this code with those two 'dodgy' features using a modern and easily-available compiler such as g95 or gfortran?
Pass the argument -std=legacy to gfortran. Features removed in F95, like real loop and array indices, should compile (perhaps with a warning) in legacy mode.
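A minimal sketch (source file name illustrative):

gfortran -std=legacy oldcode.f -o oldcode    # accepts the deleted features, perhaps with warnings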