I am moving our project repo from MSVC project files to CMake, but there is one special module I want to leave as a .vcxproj. This seems to be possible thanks to include_external_msproject(). There are a number of issues with this command, but the biggest blocker is that I need to somehow define dependencies.
Well, I use add_dependencies(), but it seems that CMake doesn't force the dependent module to be compiled :(
Is there any way to force dependency compilation?
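A minimal sketch of the setup (target and file names are made up for illustration):

    # Import the module that stays as a hand-maintained .vcxproj
    include_external_msproject(ExternalModule
        ${CMAKE_CURRENT_SOURCE_DIR}/ExternalModule/ExternalModule.vcxproj)

    add_executable(MainApp main.cpp)
    # The intent: building MainApp should build ExternalModule first
    add_dependencies(MainApp ExternalModule)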
I am trying to compile a project that needs the reqwest crate, which depends on the ring and mime_guess crates. With the MinGW compiler, ring fails to build, but with tdm-gcc, mime_guess won't build.
Is it possible to use a fallback compiler? If yes, how?
Keep the tdm-gcc directory renamed so it isn't picked up by default, but keep its (properly named) location first in the PATH. When you need to build something with tdm-gcc, rename the directory back to its proper name, and any program looking for gcc.exe will find it first.
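For example (paths are hypothetical, and this assumes PATH lists C:\tdm-gcc\bin ahead of MinGW's bin directory), from a Windows command prompt:

    rem Enable tdm-gcc by restoring the directory name PATH points at
    ren C:\tdm-gcc-disabled tdm-gcc
    cargo build
    rem Disable it again so MinGW's gcc.exe is found instead
    ren C:\tdm-gcc tdm-gcc-disabled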
I am developing a project against a custom Linux target and I am having trouble with dynamic libraries that are referenced by dependencies.
Is there a way to know beforehand whether a dependency has dynamically linked libraries? Is it possible to somehow avoid those libraries? I want to have a static binary (musl didn't work for me, as one dependency doesn't compile with it).
Thanks
If you're compiling against glibc, you'll need to have at least some dynamic linking. While it is possible to statically link glibc, that isn't a supported configuration since the name service switch won't work in such a case.
In general, you should expect a build-dependency on cc or pkg-config to be an indicator of the use of a C or C++ library. That isn't a guarantee either way, but it will probably be the case the vast majority of the time. Some of those libraries can be linked statically, but of course, if you do that, you must recompile your code every time one of your dependencies has a security update, or you'll be shipping a vulnerability. There's unfortunately no clear way to tell whether static linking is an option other than looking at the crate's build.rs or its documentation.
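As a practical aid (assuming a reasonably recent Cargo; the cargo tree subcommand ships with Cargo 1.44 and later), you can list only the build-dependency edges of the whole dependency graph and look for cc or pkg-config:

    # Print the dependency tree restricted to build dependencies
    cargo tree -e build | grep -E '(cc|pkg-config) v'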
I'm trying to update an existing configuration in which we cross-compile for a number of targets; the question here is specifically about Android. More specifically, we are building code using cmake and the hunter package manager. However, we are building ICU via a setup that uses autoconf/configure, called from cmake. I'm not sure that detail matters, except that we have less control over how configure is invoked than is generally the case.
OK: we have a version that builds against an old NDK, but I am updating and have hit a problem identified by https://android.googlesource.com/platform/ndk/+/master/docs/UnifiedHeaders.md: with NDK r16 and later, the value of the sysroot parameter needs to vary between compilation and linkage. As it stands, the configure script tries to build a small program, conftest.c, and the program fails to link. Manually I can compile the code in two stages, using -c and then linking the resulting .o, but that is not what configure is trying to do.
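Concretely, the two-stage build that works by hand looks roughly like this, following the unified-headers document ($CC stands for the NDK clang for the target; the triple, API level, and arch here are illustrative):

    # Compile: use the unified-headers sysroot plus the per-triple include dir
    $CC --sysroot=$NDK/sysroot \
        -isystem $NDK/sysroot/usr/include/arm-linux-androideabi \
        -D__ANDROID_API__=21 -c conftest.c -o conftest.o
    # Link: use the per-API platform sysroot instead
    $CC --sysroot=$NDK/platforms/android-21/arch-arm conftest.o -o conftest

configure, by contrast, compiles and links conftest.c in a single invocation with a single --sysroot, which is exactly what fails.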
Now, the reality is that when I build this code, I don't actually need to link it - I am generating a library which is used elsewhere. However, that is not currently the way configure sees it.
I may look at redoing the configuration script to just check that the code can be compiled when cross-compiling. However, I am curious to know whether anybody has managed to handle this sort of thing by keeping the existing config files and just changing the parameters with which the scripts are called.
When r19 is released as stable, this problem will go away on its own (https://github.com/android-ndk/ndk/issues/780), but since it's still in beta, that's not a good solution just yet.
Prior to r19 (this isn't really unique to r16+, this has always been the case and it was just asymptomatic previously), autoconf builds should be done using a standalone toolchain.
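For reference, creating and using a standalone toolchain for an autoconf build looks roughly like this (the arch, API level, and install directory are illustrative):

    # Carve a self-contained toolchain out of the NDK
    $NDK/build/tools/make_standalone_toolchain.py \
        --arch arm --api 21 --install-dir /tmp/android-toolchain
    export PATH=/tmp/android-toolchain/bin:$PATH
    # The toolchain's compilers have the sysroot baked in, so a single
    # invocation can both compile and link, which is what configure expects
    ./configure --host=arm-linux-androideabi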
However, you should not use a standalone toolchain for CMake, so odds are something about your configuration will need to change until r19 is released. Depending on the effort involved, it may make sense to stay on r15 until r19 is available.
When I was working on my project, I introduced various libraries by moving them into my /libs folder and telling Gradle to compile them. Now I've noticed that even if I remove the lines for their compilation from Gradle, the project still compiles and works fine.
Why? What was the point of adding them into my Gradle dependencies if I don't need them?
There are a few cases I can think of:
Some of the dependencies you introduced later on were in turn dependent on the ones you removed. Gradle would download all the transitive libraries, so your project might still work fine (the dependency report shown below can confirm this).
There could be runtime dependencies on those libraries. Removing them doesn't affect compilation, but the app might fail if someone invokes code that depends on the library, and you might see a NoClassDefFoundError.
Your project used to depend on those libraries earlier but now it doesn't, so removing them doesn't cause any harm.
You added those libraries without actually checking if they were needed
Frankly speaking, all I can do is make some random guesses.
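To check the first case, Gradle can print the resolved dependency tree, which shows whether another dependency is pulling a removed library back in transitively (:app is the conventional Android application module name; substitute your own):

    ./gradlew :app:dependencies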
When I run cabal build it uses some Haskell compiler to build the executables and/or test-suites in my .cabal file.
Can I control which compiler is used for the different targets? Ideally, I would like to have separate build targets that use ghc and ghcjs in the same .cabal file. It seems to me that someone might want to use ghc and hugs, or two versions of ghc, in the same project. Is this currently possible?
Also, how does cabal decide which compiler to use when running cabal build? I saw there is a compiler option in my ~/.cabal/config file, but uncommenting it and changing it from ghc to ghcjs did not seem to change what cabal build does.
The compiler to use is determined during the configure step (or during an install step's implicit configure step, which does not share configuration options with a previous configure step). It is also determined by the entity building the package and cannot be influenced by the person writing the package. Probably what happened to you is that a previous cabal build implicitly invoked the configure step and chose a compiler; future builds will keep a previous choice of compiler over one stuck in your global configuration file. You can countermand that by simply manually running cabal configure again.
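For example, to override the remembered choice (--with-compiler, short form -w, is the relevant flag in old-style cabal-install):

    # Re-run the configure step with an explicit compiler
    cabal configure --with-compiler=ghcjs
    cabal build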
It is possible to cause a build to fail with the wrong implementation, e.g.
library
  if impl(ghc)
    buildable: False
will prevent cabal from trying to build the package using GHC. However, this isn't really useful for building separate parts of a package with separate compilers, as cabal will refuse to install a package unless it can build the whole thing with a single compiler.
Probably the best way forward is to make separate packages for things that should be built by separate compilers.