I don't quite understand the difference between the following C/C++ compilers: GCC, MinGW, Cygwin and MSVC. Are MinGW and Cygwin implementations of GCC, or something entirely different? If I intend to compile for Windows, do I ever need anything other than the MSVC (Visual Studio) compiler?
GCC for Windows is mostly useful for existing makefiles and for code written with GCC-specific, non-portable syntax; MinGW is essentially that, a port of GCC (plus the GNU binutils) targeting native Windows.
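For example (a minimal sketch, not from the original answer), this is the kind of GNU-specific syntax that g++/MinGW accept but MSVC rejects:

#include <cstdio>

/* GNU statement expression plus __typeof__: a classic gcc-ism */
#define GNU_MAX(a, b) ({ __typeof__(a) _a = (a); __typeof__(b) _b = (b); _a > _b ? _a : _b; })

/* GNU attribute syntax, also not understood by MSVC */
__attribute__((noinline)) static int answer() { return 42; }

int main() {
    std::printf("%d\n", GNU_MAX(answer(), 7));  /* prints 42 */
    return 0;
}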
Cygwin is not a compiler: it is a set of libraries that create a Linux-like environment inside Windows, plus common Linux tools (including GCC) compiled against those libraries. It is useful for code written with Unixisms (code that expects files to behave a certain way, assumes the directory separator is /, or assumes Linux-style paths).
If you have pure Windows code, you'll be happiest with Visual C++ (MSVC). Or, if that doesn't optimize well enough, the Intel C++ compiler.
Visual C++ also works well for portable, ISO-conformant code... but Microsoft is a little behind the curve on implementing C++11 and C++14 features. So that's another reason you might want to use GCC (or Clang).
Related
Can I compile files (e.g. C or C++ source code) for my Android device using the arm-linux-gnueabi-* toolchain?
My question might seem a bit silly, but will I get the same result as compiling with the arm-linux-androideabi-* toolchain?
Compilation can mean more than just converting source code to binary. A compiler like GCC also provides support libraries, in this case libgcc, for operations the hardware can't handle directly. When a compiler becomes a toolchain, it additionally provides the runtime libraries standardised by the programming language, matching the ones provided on the target system. In arm-linux-gnueabi-'s case that is (g)libc; for arm-linux-androideabi- it is Bionic.
You can produce object files that are compatible across different compilers; that's what ELF is for.
You can produce static executables, which can be mighty in size, and they should work on any matching hardware/kernel combination, because that is what toolchains aim for in that case.
But if you produce dynamic executables, they can only run on systems that provide their dependencies. Because of that, a simple "hello world" application built non-statically by arm-linux-gnueabi- won't work on an Android system, since Android provides Bionic, not (g)libc.
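For illustration (a hedged sketch, the file name and build commands are just examples), the difference already shows up with a trivial program:

// hello.cpp - trivial program to illustrate the static/dynamic point
#include <cstdio>
int main() {
    std::puts("hello from a cross-compiled binary");
    return 0;
}
// Possible build commands (assuming the Debian-style arm-linux-gnueabi-g++ driver):
//   arm-linux-gnueabi-g++ -static hello.cpp -o hello_static   (carries its own libc; should run on any matching ARM Linux kernel, Android included)
//   arm-linux-gnueabi-g++ hello.cpp -o hello_dynamic          (needs the target's glibc at run time, so it fails on Android/Bionic)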
Here is an interesting question that, if answered positively, would make cross-compiling a whole lot easier.
Since GCC is written in C++, would it be possible to recompile the Linux GCC compiler with the Windows MinGW g++ or Visual C++ compiler, so that the resulting Windows executable would be able to compile C code into Linux programs?
If so, what would be needed to do that?
So, to simplify, here is what I want to do:
mingw32-g++ gcc.cpp -o gcc.exe
The command will probably not work; it would probably have been done before if it were that easy. What I am asking is whether this concept would even be possible.
Edit: thanks and expanding the question to NVCC
fvu was able to answer the question for the GCC compiler (please use the answer button next time), so if you had the same question you can thank him (or her).
As an extension to the question, would it be possible to edit or recompile nvcc, or the things it uses, so that nvcc.exe can create a Linux program from CUDA C code? I read that the Windows variant of nvcc can only use the Visual Studio cl.exe and not MinGW or Cygwin.
Is it possible to create Linux programs with cl.exe? And if so, could that be used to generate Linux programs with nvcc.exe?
Read the chapter on cross-compiling in the GCC manual; GCC's architecture makes it quite easy to set up a toolchain where the target is different from the development machine.
I never went the exact route you describe, but I have built toolchains under Windows that target ARM9 embedded Linux machines, and it works like a charm (using Cygwin, by the way). Look here for a gentle introduction; there is also very useful info here.
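Very roughly (a hedged sketch: versions, install paths and the exact target triplet are placeholders, and the target C library and header steps are omitted entirely), building such a toolchain boils down to configuring binutils and GCC with a --target different from the host:

../binutils-<version>/configure --target=x86_64-pc-linux-gnu --prefix=/opt/cross
make && make install
../gcc-<version>/configure --target=x86_64-pc-linux-gnu --prefix=/opt/cross --enable-languages=c,c++
make all-gcc && make install-gcc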
I am not going to comment on what can be done with respect to nvcc; CUDA is somewhere on my (long) list of stuff to tinker with...
Now, can cl generate Linux binaries? The answer to this question is "sort of": as long as the target processor is from a processor family supported by cl, the object files it generates should probably not contain anything that would inhibit their execution on Linux, as they'll just contain machine code. That's the theory. However:
as Linux uses a different executable format, you would need a Windows-hosted linker that understands Windows-style object files (AFAIK, COFF) and links them together into a Linux-style (ELF) executable. I have never heard of such a beast, although in theory it could exist;
the startup code (a tiny program that wraps around your main function) will also be different and would need to be written; a rough sketch of what such startup code does follows this list;
and there are more, e.g. library-related issues.
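To give an idea of what that startup code involves, here is a deliberately simplified sketch (x86-64 Linux only, no argc/argv handling, not production code; my_program_main is a hypothetical stand-in for the program's entry point):

// start.cpp - toy replacement for crt0; built with something like
//   g++ -nostdlib start.cpp program.cpp -o program
extern "C" int my_program_main(int argc, char **argv);

extern "C" void _start() {
    // a real crt0 would read argc/argv/envp from the stack the kernel set up
    // and run static initializers before handing control to main
    int status = my_program_main(0, nullptr);
    // terminate via the Linux exit_group system call (number 231 on x86-64)
    __asm__ volatile("syscall" : : "a"(231), "D"(status) : "rcx", "r11");
    __builtin_unreachable();
}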
So, the practical answer is no, although it might be a nice summer project for a bored student :)
I am extracting code designed for an embedded system that uses math functions from newlib, and I would like to compile that code with Visual C++ Express Edition. However, it seems that part of the code inside newlib is designed to be compiled only with GCC.
Question: Can newlib somehow be modified so that it compiles with a compiler other than GCC? How?
Am I asking unreasonable things here?
As an example, the following symbols are not understood by the Visual C++ compiler:
__extension__
__ULong
_mbtowc_state
__attribute__
Note: I would content myself with being able to compile with LCC. Would this be easier?
Building newlib with MSVC would take a large porting effort. You are better off porting your code to the libc provided by MSVC. They should be mostly compatible. Simply remove newlib from your build system; MSVC will automatically link your code against its own libc.
If you can build your code under MSVC, you've probably already ported it to MSVC's libc anyway, unless you are explicitly including headers from newlib. For example, if you include stdio.h, the compiler will by default pick up MSVC's version unless you override this behavior to make it use newlib's version.
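That said, if you do want to experiment with building GCC-flavoured headers under another compiler, the usual trick (a hedged, very incomplete sketch) is to define the GCC-only keywords away before including the offending headers:

/* compat_shim.h - hypothetical shim, only a sketch of the idea */
#ifndef __GNUC__
#define __extension__            /* GCC marker keyword: safe to drop */
#define __attribute__(x)         /* discard GCC attribute lists */
typedef unsigned long __ULong;   /* check newlib's own definition of this */
#endif
/* internal newlib types such as _mbtowc_state have no one-line fix; they
   would need the corresponding newlib-internal headers or a real port */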
I have code that I wrote in Fortran during my PhD, and now I am collaborating with some researchers who use Linux, and they need my model, which is basically a single executable file. In the future I will probably make it open source, but for now they just want the executable, partly because they are not programmers and have never compiled a program in their lives. So the question is: is it possible to compile it on my Linux machine and then send it to them to use on another Linux machine? Or do the Linux version and distribution matter?
thank you very much
A.
If you do not use many libraries, you can do that. One option is to link the executable statically (the -static or similar compiler option). You need the static versions of all required libraries for that; they have a .a suffix. They are often not installed by default in Linux distributions, and sometimes they are not supplied in the repositories at all.
In my distribution (openSUSE) they are in packages like glibc-devel-static, lapack-devel-static and similar.
The other option is to compile the executable on a distribution compatible with the one the users have (the glibc version is important) and to ship all the .so dynamically linked libraries they will need together with your executable.
All of this assumes both sides use the same platform, like i586, amd64 or ARM, as wallyk comments; I mostly assumed you are on a PC. You can force most compilers to produce a 32-bit or 64-bit executable with the -m32 or -m64 option; you need the right version of the development libraries for that.
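For example (assuming the GNU Fortran compiler and a source file called model.f90, both just placeholders), a fully static 64-bit build could look roughly like this:

gfortran -m64 -static model.f90 -o model

Running ldd on the result should report that it is not a dynamic executable, i.e. it carries everything it needs apart from the kernel interface.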
Does anyone know if LLVM binary compatibility is planned for Visual Studio-compiled .obj and static .lib files?
Right now I can only link LLVM-made .obj files with dynamic libraries that load a DLL at runtime (compiled with Visual Studio).
While there is probably very little chance that binary compatibility will happen between the two compilers, does anybody know why it is so difficult to achieve this between compilers for the same platform?
As Neil already said, compatibility includes things like calling convention, name mangling, etc., though these two are the smallest possible problems. LLVM already knows about all the Windows-specific calling conventions (stdcall, fastcall, thiscall); that is why you can call stuff from DLLs.
If we are speaking about C++ code, then the main problem is the C++ ABI: vtable layout, RTTI implementation, etc. Clang follows the Itanium C++ ABI (which GCC uses, for example, among others); Visual C++ doesn't, and its ABI is undocumented, unfortunately. There is some work going on in Clang in this direction, so things might gradually start to work. Note that most probably some parts will never be covered, e.g. SEH-based exception handling on Win32, because it is patented.
Linking with pure C code has worked for ages, so you might work around these C++ ABI-related issues via C stubs / wrappers.
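A minimal sketch of that approach (all names here are invented for illustration): hide the C++ class behind an opaque handle and a set of plain C functions, compile the wrapper with the same compiler as the C++ library, and let the other compiler call only the C interface.

/* widget_c.h - plain C interface that both compilers can agree on */
#ifdef __cplusplus
extern "C" {
#endif
typedef struct WidgetHandle WidgetHandle;   /* opaque handle */
WidgetHandle *widget_create(int size);
int           widget_frob(WidgetHandle *w);
void          widget_destroy(WidgetHandle *w);
#ifdef __cplusplus
}
#endif

// widget_c.cpp - built with the same compiler as the C++ library
// (Widget and widget.hpp are hypothetical stand-ins for the wrapped class)
#include "widget_c.h"
#include "widget.hpp"
extern "C" WidgetHandle *widget_create(int size) {
    return reinterpret_cast<WidgetHandle *>(new Widget(size));
}
extern "C" int widget_frob(WidgetHandle *w) {
    return reinterpret_cast<Widget *>(w)->frob();
}
extern "C" void widget_destroy(WidgetHandle *w) {
    delete reinterpret_cast<Widget *>(w);
}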
Apart from anything else, such as calling conventions, register usage, etc., for C++ code to be binary compatible the two compilers must use the same name-mangling scheme. These schemes are proprietary (so MS does not release the details of its scheme) and are in any case in a constant state of flux.
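For a concrete flavour of the difference (written from memory, so treat the exact strings as approximate), the same declaration mangles quite differently under the two schemes:

void foo(int);
//   g++ / clang (Itanium C++ ABI):  _Z3fooi
//   MSVC:                           ?foo@@YAXH@Z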