I have suddenly had a bout of confusion around MinGW-w64 and other compilers that are described as 64-bit. Does this mean that the compiler is built to run on a 64-bit platform but compiles 32-bit code (this seems to be the case for all the MinGW-w64 compilers I have found)? Or does it mean that it will actually compile and build 64-bit binaries?
I want to build 64-bit binaries with a 64-bit compiler, and I am a little confused as to whether I am actually getting 64-bit output despite having installed a 64-bit compiler.
There are a number of 64-bit MinGW distributions around, e.g. TDM-GCC and MinGW-w64, yet their binary directories seem to contain files with mingw32 in their names?
A "64 bit compiler" will output 64 bit executables. It may or may not be 64 bit itself; MSVC++ for instance has a 64 bit compiler that's 32 bits itself.
When I compile a large project (for example, Bitcoin) with both GCC (using MinGW) and MSVC (using Visual Studio) at comparable optimization settings, the GCC binary is 6 MB and the MSVC binary is 4 MB.
Now I am wondering: does this mean that MSVC produces better binaries (and I mean better as in smaller and faster)? Or does it mean nothing, and the difference is just symbol information or something else unrelated to performance?
I expect a lot of comments saying "just benchmark it", but I'm more interested in the reason for the difference than in the exact size/performance difference itself.
It is possible that with -O2 alone, MinGW may produce slower binaries than MSVC. I haven't tested this and do not know. However, I do know that with -march=native enabled, in my own benchmarks (http://plflib.org/colony.htm#benchmarks) MinGW outperforms MSVC (with the appropriate target optimisations) by about 20%.
The main reason, I would imagine, is better customisation for individual CPU targets as opposed to MSVC's more scattershot approach. However it may be that GCC's code gen is simply better.
However, on other benchmarks MSVC might show a performance improvement. My own results are in no way definitive, but they are indicative.
Lastly, I will note that yes, MSVC does produce smaller binaries in general, but watch what you #include. Including iostream in GCC/libstdc++ drags in a ton of code, whereas in MSVC it drags in very little. And, as others have said, smaller is not necessarily faster.
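If you want to see the iostream effect for yourself, a rough sketch is to compare two near-identical programs (exact sizes vary with compiler version, standard library and flags, so treat the commands below as illustrative):

    // with_iostream.cpp - pulls in libstdc++'s iostream machinery
    #include <iostream>
    int main() { std::cout << "hello\n"; }

    // without_iostream.cpp - uses only the C runtime
    #include <cstdio>
    int main() { std::puts("hello"); }

    // Build both the same way and compare sizes, e.g.:
    //   g++ -O2 -s with_iostream.cpp    -o with_iostream.exe
    //   g++ -O2 -s without_iostream.cpp -o without_iostream.exe
    // On MinGW the iostream build is typically much larger because libstdc++
    // code ends up in the executable; MSVC's gap tends to be smaller since much
    // of its runtime lives in a shared DLL. Neither number says anything about speed.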
According to this wxwidgets page on reducing executable size, Visual C++ is known to produce a smaller, faster executable, at least on Windows.
Use Microsoft Visual C++ instead of gcc (Cygwin or Mingw) on Windows.
It does produce smaller and faster executables.
Smaller is not necessarily faster. My latest compilations make extensive use of SIMD instructions, which can mean more than one instruction sequence generated for a single line of code: some for AVX SIMD, some for SSE SIMD and some for SSE SISD. Then there can be significant loop unrolling (to maintain pipeline flow), with numerous repetitive instruction sequences.
Some might be following the same procedure as on Android via Eclipse, where a build parameter, APP_ABI := all, generates code for arm64-v8a, armeabi, armeabi-v7a, mips, mips64, x86 and x86_64, with the appropriate version selected automatically at run time.
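One common way such multi-path binaries are arranged by hand with GCC/MinGW is run-time dispatch. A minimal sketch, assuming GCC's __builtin_cpu_supports is available (the add_* kernel names are made up for illustration; a real build would compile each kernel with its own target options):

    #include <cstddef>

    // Hypothetical kernels. In a real project each would be compiled with
    // -mavx / -msse, or marked __attribute__((target("avx"))) etc., so that
    // the compiler emits the corresponding SIMD instructions for it.
    static void add_scalar(float* d, const float* a, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) d[i] = a[i] + b[i];
    }
    static void add_sse(float* d, const float* a, const float* b, std::size_t n) { add_scalar(d, a, b, n); }
    static void add_avx(float* d, const float* a, const float* b, std::size_t n) { add_scalar(d, a, b, n); }

    using add_fn = void (*)(float*, const float*, const float*, std::size_t);

    // Ask the CPU once at run time which instruction set it supports,
    // then use the best matching implementation from then on.
    static add_fn select_add() {
        if (__builtin_cpu_supports("avx")) return add_avx;
        if (__builtin_cpu_supports("sse")) return add_sse;
        return add_scalar;
    }

    int main() {
        float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1}, out[4];
        select_add()(out, a, b, 4);
        return static_cast<int>(out[0]);   // keep the result live
    }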
I am teaching myself assembly by reading up on it. Most books on assembly refer to x86: all the register names in the code begin with "e" rather than "r" (as they would in x86-64). However, I use 64-bit Linux, and I was wondering whether these books have any value given that they do not cover x86-64.
So, in short: is it really worth using these resources to learn x86-64? Put differently, besides the difference in register naming convention, are there any other differences between the two that could make learning from x86 material difficult?
64-bit Linux allows running 32-bit applications, so you can still create 32-bit applications on your computer. This way, the books and their example 32-bit code remain perfectly useful.
The only problem you might have is if the assembly application dynamically links against some 32-bit shared library. To fix this you should install the 32-bit compatibility layer.
Assembly programs that use only Linux system calls work fine without this layer, which is really just a set of shared libraries compiled for 32-bit.
BTW, in my opinion, writing 32-bit code is still better if you want your programs to be useful to more people. There are still many 32-bit computers around, and they will not disappear soon.
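For what it's worth, the same idea applies if you want to build the books' examples from C or C++ rather than raw assembly: on a 64-bit Linux install the -m32 switch produces a 32-bit binary. A minimal sketch (the package names are typical Debian/Ubuntu ones and may differ on your distribution):

    // hello32.cpp - building 32-bit on a 64-bit Linux host
    //
    //   sudo apt-get install gcc-multilib g++-multilib   # 32-bit compiler support
    //   g++ -m32 hello32.cpp -o hello32                  # needs 32-bit libs at run time
    //   g++ -m32 -static hello32.cpp -o hello32_static   # avoids 32-bit shared libraries,
    //                                                    # if static 32-bit libs are installed
    //   file hello32    # should report "ELF 32-bit LSB executable"
    #include <iostream>

    int main() {
        std::cout << "running as a " << sizeof(void*) * 8 << "-bit process\n";
        return 0;
    }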
It is indeed a bit easier to learn assembly on 32-bit, since the calling conventions and stack management are simpler.
On 64-bit you need to worry about the ABI, and the conventions are not the same across operating systems. For instance, the ABI rules on Mac OS X are different from those on Windows: the argument registers are not the same, and Windows uses only four registers for arguments.
You can assemble your code for 32-bit using -arch i386 with the assembler (as). With clang or gcc you can use -m32 (at least on Mac OS X, since I haven't used it on Linux proper). You won't be able to link modules of different bitness (32-bit vs 64-bit).
Once you're ready to switch or compile your program for 64-bit, you will have to make sure that when you handle the stack you push 64-bit words instead of 32-bit ones, but that more or less goes without saying.
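A quick way to see those convention differences for yourself is to compile one tiny function for both targets and diff the assembler output; a sketch (the register details shown are for the System V ABI used by Linux and Mac OS X):

    // args.cpp - compare the code the compiler generates:
    //   g++ -S -O1 -m32 args.cpp -o args32.s   # 32-bit: arguments are read from the stack
    //   g++ -S -O1 -m64 args.cpp -o args64.s   # 64-bit SysV: arguments arrive in rdi/rsi/rdx
    // (On 64-bit Windows the first four integer arguments go in rcx/rdx/r8/r9 instead.)
    long add3(long a, long b, long c) {
        return a + b + c;
    }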
I have some 32-bit library files (.a files) on Solaris. I am porting my application to a 64-bit Linux environment. Is there any way to convert the 32-bit libraries to 64-bit, or should I regenerate the libraries as 64-bit?
It is not just a question of 32-bit vs 64-bit. It's also a question of Solaris versus Linux. These are two operating systems that have different calling conventions and different ABIs. That means things like sizes of data types can be different, the way the compiler puts stuff in registers and on the stack to do a function call is different, the way system calls are done is different, etc.
It is probably possible to convert a static library in the way you want, in some cases, but you would need to write the tools yourself. Compiling from source is way easier, much more reliable, and also something you need to be able to do at will anyway (otherwise you can't easily fix problems in the library, e.g., security issues).
No; you have to recompile them for 64-bit, because a lot of necessary information is lost during the compilation.
Good luck.
I was surprised to read that Adobe discontinued the 64-bit version of Flash for Linux; there is a new 32-bit version, and Adobe advises users to use the 32-bit version of Firefox instead.
I was wondering, as I haven't had to do this yet: is it really that hard to port an application to 64-bit? Besides the library changes and the recompilation (settings in the Makefile), what makes the port difficult? (Flash is an example.)
As noted in an Adobe blog post, Flash's ActionScript engine has a JIT compiler, which compiles the ActionScript code into native code.
x64 has a very different instruction set from x86. Therefore, making the JIT compiler generate x64 code is a non-trivial task, and is far more complicated than just making all the words 64 bits wide. :-)
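To make that concrete, here is a toy sketch (in no way Adobe's actual JIT) that writes a few bytes of x86-64 machine code into executable memory at run time and calls them. Everything about those bytes, the registers used, and the calling convention assumed is architecture-specific, which is what a JIT back-end has to be rewritten for:

    // Toy run-time code generation for x86-64 Linux (System V ABI).
    // Deliberately not portable: the emitted bytes are only valid on x86-64.
    #include <sys/mman.h>
    #include <cstring>
    #include <cstdio>

    int main() {
        // Machine code for:  int identity(int x) { return x; }
        //   89 F8   mov eax, edi   ; SysV passes the first int argument in edi
        //   C3      ret
        // The 32-bit cdecl version would instead load the argument from the
        // stack (8B 44 24 04), one small example of why the code generator
        // cannot simply be reused for a different target.
        const unsigned char code[] = { 0x89, 0xF8, 0xC3 };

        void* mem = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;
        std::memcpy(mem, code, sizeof code);

        // Casting a data pointer to a function pointer is only conditionally
        // supported in C++, but works on the POSIX platforms this sketch targets.
        auto identity = reinterpret_cast<int (*)(int)>(mem);
        std::printf("%d\n", identity(42));   // prints 42

        munmap(mem, 4096);
        return 0;
    }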
The real kicker with porting apps to 64-bit is that every OS seems to treat the primitive types as it pleases. For example, under most Linux environments a long is 4 bytes on a 32-bit system and 8 bytes on a 64-bit system, while an int stays 4 bytes on both (the LP64 model). Under Windows the behaviour is different: long stays consistently 4 bytes, and it is the pointers (and long long) that grow to 64 bits (the LLP64 model).
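A few lines of C++ make the difference visible; the same source prints different numbers depending on the data model it was compiled for (a sketch, with the usual values shown in the comments):

    #include <cstdio>

    int main() {
        // 64-bit Linux (LP64):    int=4  long=8  long long=8  void*=8
        // 64-bit Windows (LLP64): int=4  long=4  long long=8  void*=8
        // 32-bit targets:         int=4  long=4  long long=8  void*=4
        std::printf("int=%zu long=%zu long long=%zu void*=%zu\n",
                    sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));
        return 0;
    }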
That said, I have ported a medium-sized project at work (about 25,000 lines of code) from 32-bit to 64-bit Linux, only having to make changes in the assembly code (GASM), which made several faulty assumptions about data types being 4 bytes long. Other than this, I had no problems, which suggests that, provided you paid strict attention to your data types when you were first developing, porting should be seamless, perhaps only requiring certain compile switches to be changed (like -fpic). There were a few really bizarre corner cases that came up in my porting experience, but I think they were mostly due to undefined behaviour in some of the GASM code rather than the porting itself.
If you use a lot of ints and floats, it can be amazingly complex to get it to work properly, especially if it is a networked app.
It took over two years to port xMule to 64-bit, and I don't believe its parent project, eMule, has a 64-bit build at all.
Ideally it should be just a recompilation; in reality it takes a significant amount of effort. Even if it is simple, there still has to be a full sweep by the QA team to prove it works, and that always takes a while.
The obvious problems, I guess, are variable sizes (e.g. long is 64 bits under most 64-bit compilers, though not on Windows); this breaks anything that relies on size-related operations such as bit shifting and some pointer arithmetic. I think Adobe just can't be bothered to scan through and ensure cross-compatibility, especially when 90%+ of browser use is on 32-bit versions. I know Flash has never worked in 64-bit IE, but even 64-bit Windows 7 defaults to the 32-bit browser.
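Two typical examples of code that silently changes meaning when long grows to 64 bits (hypothetical snippets, not taken from Flash):

    #include <cstdio>
    #include <cstdint>

    int main() {
        // Intended as a 32-bit "high half" mask. On LP64 Linux ~0UL is
        // 0xFFFFFFFFFFFFFFFF, so the shifted mask keeps 32 extra high bits
        // the original author never expected; on 32-bit (or LLP64 Windows)
        // the very same line produces 0xFFFF0000.
        unsigned long mask = ~0UL << 16;
        std::printf("mask = %#lx\n", mask);

        // Stuffing a pointer into a 32-bit integer truncates it on 64-bit
        // targets; code like this compiled and "worked" for years on 32-bit.
        int addr = static_cast<int>(reinterpret_cast<std::intptr_t>(&mask));
        std::printf("truncated=%#x actual=%p\n", static_cast<unsigned>(addr),
                    static_cast<void*>(&mask));
        return 0;
    }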
There is a lot of information on this here if you're interested:
http://www.viva64.com/content/articles/64-bit-development/?f=20_issues_of_porting_C++_code_on_the_64-bit_platform.html&lang=en&content=64-bit-development
The coding part of porting to 64-bit generally isn't that hard, but it can require some time and a lot of hair-pulling over builds, libraries, etc. However, the real problem, especially for a widely deployed project like Flash, is going through proper testing coverage for the many code paths and platforms. 64-bit is a horizontal feature, so it can potentially break everything, and everything needs to be tested.
For Flash on Linux in particular, it's probably more of a cost-benefit issue: is catering to the percentage of users who actually use Linux and 64-bit worth the development costs for Adobe? Probably not, at this point.
I've recently upgraded my OS to Snow Leopard, which broke my GHC. I was able to fix it on one machine by adding flags for 32-bit compiles in /usr/bin/ghc (something like -optl -m32 -opta -m32 -optc -m32, gathered from here). Now I can't get it to produce 64-bit binaries for my other machine, which supports 64-bit. The build breaks with the 32-bit flags, and it breaks without them as well. Any tips?
When I try to compile I get stuff like this:
/var/folders/az/az3Ef9shFZq6RajmTEBwu++++TI/-Tmp-//ghc8006_0/ghc8006_0.s:212:0:
32-bit absolute addressing is not supported for x86-64
/var/folders/az/az3Ef9shFZq6RajmTEBwu++++TI/-Tmp-//ghc8006_0/ghc8006_0.s:212:0:
cannot do signed 4 byte relocation
/var/folders/az/az3Ef9shFZq6RajmTEBwu++++TI/-Tmp-//ghc8006_0/ghc8006_0.s:215:0:
32-bit absolute addressing is not supported for x86-64
/var/folders/az/az3Ef9shFZq6RajmTEBwu++++TI/-Tmp-//ghc8006_0/ghc8006_0.s:215:0:
cannot do signed 4 byte relocation
Thanks!
64 bit Snow Leopard installers for the Haskell Platform are available, as of 2011.
http://hackage.haskell.org/platform/mac.html
My understanding is that at the moment GHC cannot generate correct 64-bit binaries under Snow Leopard. This appears to be partly because of a bug in its 64-bit linking and partly because of a change in the native toolchain. The workaround you mention simply tells it to generate a 32-bit target, and thus won't be part of any actual solution to your problem.