I use Boost in my project and I need to build it from source.
I use GCC 11 as the compiler and I want to build the project with the -std=c++20 option.
Do I need to build Boost with -std=c++20 too?
Knowing that Boost lists this known issue:
Boost.Operators is currently incompatible with C++20 compilers, which in some cases may manifest as an infinite recursion or infinite loop in runtime when a comparison operator is called. The problem is caused by the new operator rewriting behavior introduced in C++20. As a workaround, users are advised to target C++17 or older C++ standard.
What can I do?
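For what it's worth, Boost.Build (b2) has its own cxxstd feature for choosing the standard, independent of your project's -std flag; per the known issue above, a conservative choice is to build Boost itself against C++17. A minimal sketch, assuming the usual b2 workflow (the install prefix and the user-config.jam line are illustrative):

# optional: point Boost.Build at GCC 11 explicitly in user-config.jam:
#   using gcc : 11 : g++-11 ;

./bootstrap.sh --with-toolset=gcc
# build Boost against C++17 (the workaround suggested by the known issue)
./b2 toolset=gcc cxxstd=17 link=shared,static --prefix=/opt/boost install
# or, if you want to match your project's standard exactly:
# ./b2 toolset=gcc cxxstd=20 --prefix=/opt/boost install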
Related
I'm trying to update an existing configuration we have; we are cross-compiling for a number of targets - the question here is specifically about Android. More specifically, we are building code using CMake and the Hunter package manager. However, we are building ICU using a link that uses autoconf/configure, called from CMake. I'm not sure that is specifically important, except that we have less control over the use of configure than is generally the case.
OK: we have a version that builds against an old NDK, but I am updating and have hit a problem identified by https://android.googlesource.com/platform/ndk/+/master/docs/UnifiedHeaders.md: with NDK r16 and later, the value of the sysroot parameter needs to differ between compilation and linkage. As it stands, the configure script tries to build a small program, conftest.c - the program fails to link. Manually I can compile the code in two stages using -c and then linking the resulting .o, but that is not what configure is trying to do.
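For illustration, the manual two-stage build looks roughly like this with unified headers ($CC stands for whatever Android clang you are driving; the arm triple and API level 21 are just examples):

# compile step: headers come from the unified sysroot
$CC --sysroot=$NDK/sysroot -isystem $NDK/sysroot/usr/include/arm-linux-androideabi \
    -D__ANDROID_API__=21 -c conftest.c -o conftest.o

# link step: the libraries still live under the per-API platform sysroot
$CC --sysroot=$NDK/platforms/android-21/arch-arm conftest.o -o conftest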
Now the reality is that when I build this code, I don't actually need to link the code - I am generating a library which is used elsewhere. However that is not currently the way that configure sees it.
I may look to redo the configuration script to just check that the code can be compiled when cross-compiling. However, I am curious to know whether anybody has managed to handle this sort of thing by keeping the existing config files and just changing the parameters with which the scripts are called.
When r19 releases to stable this problem will go away on its own (https://github.com/android-ndk/ndk/issues/780), but since that's still in beta it's not a good solution just yet.
Prior to r19 (this isn't really unique to r16+, this has always been the case and it was just asymptomatic previously), autoconf builds should be done using a standalone toolchain.
You however should not use a standalone toolchain for CMake, so odds are something about your configuration will need to change until r19 is released. Depending on the effort involved, it may make sense to keep to r15 until r19 is available.
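As a sketch of the standalone-toolchain route for the autoconf part (architecture, API level and paths are examples; any ICU-specific configure options are omitted):

# create a self-contained toolchain; its compilers have the sysroots baked in
$NDK/build/tools/make_standalone_toolchain.py \
    --arch arm --api 21 --install-dir $HOME/android-arm-toolchain

# drive the autoconf build with it - no separate compile/link sysroots needed
export PATH=$HOME/android-arm-toolchain/bin:$PATH
export CC=arm-linux-androideabi-clang CXX=arm-linux-androideabi-clang++
./configure --host=arm-linux-androideabi --prefix=$HOME/icu-android
make && make install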
To repeat: I'm looking for ABI compatibility between libraries of the same Visual-C++ version!
We want to mix and match some internal C++ DLLs from different teams - built at different times with different project files. Because of long build times, we specifically want to avoid large monolithic builds where each team re-compiles the source code of another team's library.
When consuming C++ DLLs with C++ interfaces, it is rather clear that you can only do this if all DLLs are compiled with the same compiler / Visual Studio version.
What is not readily apparent to me is what, exactly, needs to be the same to get ABI compatibility.
Obviously debug (_DEBUG) and release (NDEBUG) cannot be mixed -- but that's also apparent from the fact that these link to different versions of the shared runtime.
Do you need the exact same compiler version, or is it sufficient that the resulting DLL links to the same shared C++ runtime -- that is, basically to the same redistributable? (I think static doesn't fly when passing full C++ objects around)
Is there a documented list of compiler (and linker) options that need to be the same for two C++ DLLs of the same VC++ version to be compatible?
For example, is the same /O switch necessary - does the optimization level affect ABI compatibility? (I'm pretty sure not.)
Or do both versions have to use the same /EH switch?
Or /volatile:ms|iso ... ?
Essentially, I'd like to come up with a set of (meta-)data to associate with a Visual-C++ DLL that describes its ABI compatibility.
If differences exist, my focus is on VS2015 only at the moment.
I have been thinking this through over the last few days, and what I did was try to see whether use cases exist where devs have already needed to categorize their C++ builds to make sure binaries are compatible.
One such place is native packages on NuGet. So I looked at one package there, specifically cpprestsdk:
The binaries in the downloadable package are split like this:
native\v120\windesktop\msvcstl\dyn\rt-dyn\x64\Release\
where the path components break down as:
v120 - VS version
windesktop - Windows desktop (other values here: WinXP or WinApp(WinRT?))
msvcstl - not sure
dyn - the lib itself is dynamic (as opposed to static)
rt-dyn - uses the cpp-runtime dynamically
I pulled this out from this example because I couldn't find any other docs. I also know that the boost binaries build directory is separated in a similar way.
So, to get to a list of metadata to identify ABI compatibility, I can preliminarily list the following:
VC version (that is, the version of the C and CPP runtime libraries used)
one point here is that e.g. vc140 should be enough nowadays - given how the CRT is linked in, all possible bugfixes to the versioned CRT components must be ABI compatible anyway, so it shouldn't matter which version a given precompiled library was built with.
pure native | managed (/CLI) | WinRT
how the CRT is consumed (statically / dynamically)
bitness / platform (Win32, x64, ARM, etc.)
Release or Debug version (i.e. which version of the CRT we link to)
plus: _ITERATOR_DEBUG_LEVEL ... if everyone goes with the defaults, fine; if a project does not, it must declare so
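As a footnote to the last two items (CRT flavour and _ITERATOR_DEBUG_LEVEL): MSVC already records these as "detect mismatch" linker directives in every object file, and the linker refuses to combine objects that disagree. For object files and static libraries you can inspect them directly (the library name is made up; the output shown is the typical shape):

dumpbin /directives other_team.lib | findstr FAILIFMISMATCH
rem typically prints entries such as:
rem   /FAILIFMISMATCH:"_ITERATOR_DEBUG_LEVEL=2"
rem   /FAILIFMISMATCH:"RuntimeLibrary=MDd_DynamicDebug"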
Additionally my best guess as to the following items:
/O must not matter - we constantly mix & match binaries with different optimization settings - in fact, this even works for object files within the same binary
/volatile - since this is a code-gen thing, I have a hard time imagining how this could break an ABI
/EH - except for the option to disable all exceptions, in which case you obviously can't call anything that throws, I'm pretty confident this is safe from an ABI perspective: there are possible pitfalls here, but I don't think they can really be categorized as ABI incompatibilities. (Maybe some complex callback chains could be said to be ABI incompatible, not sure.)
Others:
Default calling convention (/G..) : I think this would break at link time, when mangled export symbols and header declarations don't match up.
/Zc:wchar_t - will break at link time (it's actually ABI compatible, but the symbols won't match).
Enable RTTI (/GR) - not too sure about this one - I have never worked with this disabled.
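On the calling-convention and /Zc:wchar_t points: both end up encoded in the decorated (mangled) names of C++ exports, so a mismatch surfaces when the importing side looks for a differently decorated symbol. A hypothetical way to eyeball this with standard VS tools (the DLL name and the decorated name are made up):

rem list the decorated C++ exports of the other team's DLL
dumpbin /exports other_team.dll

rem undname turns a decoration back into a readable signature; the calling
rem convention and wchar_t vs unsigned short parameters are visible there
undname "?Frobnicate@Widget@@QEAAXG@Z"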
I want to write a Haskell function (module) of type String -> String to call from Android. The easiest method seems to be to use JHC to generate C code, then use the Android NDK to generate a shared library, but I could not find any documentation for JHC. Does JHC also use Cabal to build? Is JHC stable enough to use the Parsec or Attoparsec library?
Back in 2011 I had limited success using JHC in a similar way, but targeting iOS instead of Android. Initial results were good in just getting the thing running, but we ended up discarding JHC in favor of GHC precisely because we started getting weird compile-time errors on programs that used Parsec. Bear in mind, this was in 2011 so JHC might have improved by a lot since.
If you want to give GHC a chance, I'd recommend looking at this example which uses GHC 7.8 to compile a game for Android. I haven't used it in anger yet, but I did manage to get it working on Docker, getting as far as rebuilding the game from scratch and installing it on a real Android device, so the approach definitely has merit.
UPDATE as of August 2017: Moritz Angermann has posted detailed instructions on targeting Android with a GHC cross-compiler.
Well, a compiler called Eta may be the most convenient way now. It targets the JVM and produces a JAR file, so you can put it directly into your project.
Can I compile files (e.g. C or C++ source code) for my Android device using the arm-linux-gnueabi-* toolchain?
My question might seem a bit silly, but will I get the same result as compiling with the arm-linux-androideabi-* toolchain?
Compilation might mean more than just converting source code to binary. A compiler like GCC also provides certain libraries, in this case libgcc, for handling what the hardware can't handle. When a compiler becomes a toolchain, it also provides the runtime libraries standardised by the programming language, similar to the ones provided on the target system. In arm-linux-gnueabi-'s case that might be libc, and for arm-linux-androideabi- it's bionic.
You can produce compatible object files to be used by different compilers; that's what ELF is for.
You can produce static executables, which can be mighty in size, and they should work on any matching hardware/kernel, because in that case that is what toolchains aim for.
But if you produce dynamic executables, they can only run on systems that provide their dependencies. Because of that, a simple "hello world" application that is not statically built by arm-linux-gnueabi- won't work on an Android system, since Android provides bionic, not libc.
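A quick way to see the difference yourself (file names are illustrative):

# dynamically linked: records NEEDED entries and a glibc loader path Android doesn't have
arm-linux-gnueabi-gcc hello.c -o hello_dyn
readelf -d hello_dyn | grep NEEDED         # e.g. Shared library: [libc.so.6]
readelf -l hello_dyn | grep interpreter    # the glibc dynamic loader path

# statically linked: no dynamic dependencies, so a matching kernel can run it
arm-linux-gnueabi-gcc -static hello.c -o hello_static
readelf -d hello_static | grep NEEDED || echo "no dynamic dependencies"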
What is the best way to determine a pre-compiled binary's dependencies (specifically in regards to glibc and libstdc++ symbols & versions) and then ensure that a target system has these installed?
I have a limitation in that I cannot provide source code to compile on each machine (employer restriction), so the de facto response of "compile on each machine to ensure compatibility" is not suitable. I also don't wish to provide statically compiled binaries - that seems very much a case of using a hammer to open an egg.
I have considered a number of approaches which loosely center around determining the symbols/libraries my executable/library requires through use of commands such as
ldd -v </path/executable>
or
objdump -x </path/executable> | grep UND
and then somehow running a command on the target system to check whether such symbols, libraries and versions are provided (I'm not entirely certain how to do this step).
This would then be followed by some pattern or symbol matching to ensure that the correct versions, or greater, are present.
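As a sketch of that comparison (library paths are illustrative and differ between distributions; the idea is just a set difference on the version tags):

# version tags the pre-compiled binary requires
objdump -T /path/to/executable | grep -oE '(GLIBC|GLIBCXX|CXXABI)_[0-9.]+' | sort -u > required.txt

# version tags the target system's libc/libstdc++ actually provide
objdump -T /lib/x86_64-linux-gnu/libc.so.6 /usr/lib/x86_64-linux-gnu/libstdc++.so.6 \
    | grep -oE '(GLIBC|GLIBCXX|CXXABI)_[0-9.]+' | sort -u > provided.txt

# anything required but not provided will make the dynamic loader reject the binary
comm -23 required.txt provided.txt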
That said, I feel like this will already have been largely done for me and I'm suffering from ... "a knowledge gap ?" of how it is currently implemented.
Any thoughts/suggestions on how to proceed?
I should add that this is for the purpose of installing my software on a wide variety of Linux distributions - in particular customised clusters - which may not obey distribution guidelines or standardised packaging methods. The objective is a seamless install.
I wish to accomplish binary compatibility at install time, not at a subsequent runtime, which may occur by a user with insufficient privileges to install dependencies.
Also, as I don't have source code access to all the third party libraries I use and install (specialised maths/engineering libraries) then an in-code solution does not work so well. I suppose I could write a binary that tests whether certain symbols (&versions) are present, but this binary itself would have compatibility issues to run.
I think my solution has to be to compile against older libraries (as mentioned) and install this as well as using the LSB checker (looks promising).
GNU libraries (glibc and libstdc++) support a mechanism called symbol versioning. First of all, these libraries export special symbols used by the dynamic linker to resolve the appropriate symbol version (CXXABI_* and GLIBCXX_* in libstdc++, GLIBC_* in glibc). A simple script along the lines of:
nm -D libc.so.6 | grep " A "
will return a list of version symbols, which can then be further shell-processed to establish the maximum supported libc interface version (the same works for libstdc++). From C code, one has the option to do the same using dlvsym() (first dlopen() the library, then check whether a certain minimal version of the symbols you need can be looked up using dlvsym()).
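For example, the shell processing can be as small as the following (library paths are illustrative; GLIBC_PRIVATE is filtered out because it is not a public interface version):

# highest public libc interface version this glibc advertises
nm -D /lib/x86_64-linux-gnu/libc.so.6 | grep " A GLIBC_" | grep -v GLIBC_PRIVATE \
    | sed 's/.* A //' | sort -V | tail -n 1

# same idea for libstdc++'s GLIBCXX_* tags
nm -D /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep " A GLIBCXX_" \
    | sed 's/.* A //' | sort -V | tail -n 1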
Other options for obtaining the glibc version at runtime include the gnu_get_libc_version() and confstr() library calls.
However, the proper use of the versioning interface is to write code which explicitly links against a specific glibc/libstdc++ interface version. For example, code linking against the GLIBC_2.10 interface version is expected to work with any glibc version newer than 2.10 (all versions up to 2.18 and beyond). While it is possible to enable versioning on a per-symbol basis (using a ".symver" assembler/linker directive), the more reasonable approach is to set up a chroot environment using an older (minimal supported) version of the toolchain and compile the project against it (the result will seamlessly run with whatever newer version is encountered).
Use the Linux Application Checker tool ([1], [2], [3]) to check the binary compatibility of your application with various Linux distributions. You can also check compatibility with your custom distribution with this tool.