I work on a Python project that in one place calls Julia code, and in another uses OpenCV.
Unfortunately, PyJulia prefers a Python interpreter that is dynamically linked to libpython. (I know I can build a custom Julia system image instead, but I fear the build delays whenever I want to test a development version of my Julia code from Python.)
What has worked so far is using Spack instead of Conda. Python built by Spack has a shared libpython, and Spack's repository does include a recent OpenCV.
Unfortunately, contrary to Conda, Spack is designed around a paradigm of compiling everything rather than downloading binaries. The installation time of OpenCV is well over an hour, which is barely acceptable for a one-off install in a development environment, but dismayingly long for building a Docker image.
So I have a thought: maybe it is possible to integrate my own Python with the rest of the Conda ecosystem?
This isn't a full solution, but Spack does support binary packages, as well as GitLab build pipelines to build them in parallel and keep them updated. What it does not have (yet) is a public binary mirror that would let you install these things very quickly from pre-existing builds. That's in the works.
So, if you like the Spack approach, you can set up your own binary caches and automated builds for your dev environment.
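A rough sketch of that workflow (the exact buildcache flags have changed between Spack releases, so treat the spellings as an assumption and check spack buildcache --help for your version):

    # machine with the finished build: publish opencv into a local mirror
    spack mirror add local-cache file:///path/to/buildcache
    spack buildcache create -d /path/to/buildcache opencv

    # machine (or Docker build stage) that should install from binaries
    spack mirror add local-cache file:///path/to/buildcache
    spack buildcache keys --install --trust
    spack install --cache-only opencv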
I am not sure what the solution would be with Conda. You could make your own conda-forge packages, but I think if you deviate from the standard ones, you may end up reimplementing a lot of packages to support your use case. On the other hand, they may accept patches to make your particular configuration work.
I'm working on a project which targets both Windows and Linux (and possibly macOS in the future). It consists of a few applications and several shared libraries, written in modern C++ and modern CMake. It also uses third-party libraries like Qt, OpenCV, Boost, GraphicsMagick, samplerate and sndfile. Those dependencies are handled through the Conan package manager. I'm building both on Linux (Ubuntu 18.04, GCC 8.1) and for Windows (via WSL, also Ubuntu 18.04, MinGW-w64 8.1). I'm using fairly recent versions of the third-party libraries with custom build options (strictly speaking, versions different from those available in Ubuntu's APT, e.g. Qt 5.11.3, or a custom build of GraphicsMagick).
I'm using CPack. On Windows I'm building an NSIS installer, but on Linux I would like to use the DEB generator (maybe others if needed). All of my targets (apps and shared libs) have appropriate CMake INSTALL configurations, so they are copied correctly into the generated installers (component-based installation). The real problem comes with packaging the third-party dependencies.
Problem
Strictly speaking, I have no idea how to do this well using CMake+CPack+Conan, on both Linux and Windows. I've read a lot of articles and posts, but I'm stuck. I would like something that automatically bundles into the installer all third-party libraries used by the project, together with needed plugins and, most importantly, with needed system/compiler libraries (libgomp, libstdc++ and so on).
Possible solution
To my surprise, on Windows this task is fairly easy, because every DLL used by the app (my libs, third-party libs and system/compiler libs) needs to be located next to the executable. I'm engaging Conan in this by importing all used DLLs into the bin directory (see the sketch below). In the end, in the most naive way of packaging, I would just copy the bin directory into the installer and it should work. But I'm not sure if this approach is OK.
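What I mean by importing looks roughly like this in conanfile.py (a sketch; the class name is made up, and the source patterns depend on how each recipe lays out its binaries):

    from conans import ConanFile

    class MyProjectConan(ConanFile):
        # settings, requires, generators, etc. elided

        def imports(self):
            # Copy every DLL from all dependencies next to the executables,
            # so bin/ is self-contained on Windows.
            self.copy("*.dll", dst="bin", src="bin")
            # Some recipes place runtime DLLs under lib/ instead.
            self.copy("*.dll", dst="bin", src="lib")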
On Linux, things are more complicated. First, there is already a package manager. Unfortunately, the libraries/compilers available there are too old for me (e.g. APT only has Qt 5.9.6) and are built with different compile options. So the only way for me is to ship the libraries with my software (as on Windows). There are also issues with how the dynamic linker searches for libraries, RPATH handling and so on. At the moment, the only solution I see is to write a 'launcher' for my app which sets LD_LIBRARY_PATH before the program starts. In that case we could just copy the bin or lib directory into the DEB package and it should work. But still, I don't know if this is the correct approach.
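The RPATH route I mentioned would avoid the launcher entirely: the executables get a relative RPATH at install time, so the loader finds the bundled libraries on its own. A sketch of what I understand that to look like, assuming executables are installed into bin/ and bundled libraries into lib/:

    # $ORIGIN is expanded by the loader to the directory of the executable,
    # so installed binaries look for shared libraries in ../lib.
    set(CMAKE_INSTALL_RPATH "$ORIGIN/../lib")
    # Do not bake build-tree paths into the installed binaries.
    set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
    set(CMAKE_INSTALL_RPATH_USE_LINK_PATH FALSE)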
Other solutions
I've also looked into other solutions. One of them was BundleUtilities from CMake. It doesn't work for me: it has a lot of problems recognizing whether a given library is a system or a local one. Especially in WSL, where it got stuck processing dependencies on USER32.dll and KERNEL32.dll. BundleUtilities on Windows worked for me only with MSYS, but in MSYS I failed to compile some third-party libraries (GraphicsMagick via Conan), and that's the reason why I'm using WSL.
Summary
I'm looking for a good, verified method of packaging C++ projects with multiple apps, libs and shipped third-party libraries, both for Windows and Linux. How do you do things like this? Are you just copying bin and/or lib dirs into the installers? How (in terms of CMake/CPack code) are you doing that? INSTALL(DIRECTORY ...), or something similar? I'm not sure, but I think this problem should already be solved in the industry. ;)
Thanks for all suggestions.
First, Conan is a package manager for development, not for distribution; that's why you didn't find an easy way to solve your problem. Second, most discussions take place in the Conan issue tracker, including bugs and questions. There you will find a big community, plus the Conan devs, who are very helpful.
with needed system/compiler libraries
This is not part of Conan. Why don't you use static linkage for system libraries?
Talking about CPack, we have an open discussion about its usage with Conan: https://github.com/conan-io/conan/issues/5655
Please, join us.
I see a few options for your case:
In the package method, run self.copy over all dependencies from self.cpp_deps, which includes all libraries, so that you can then run CPack
Use the Conan deploy generator to deploy all artifacts, and use a hook to run CPack or any other installer tool (see the sketch after this list)
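A minimal sketch of the second option: conan install . -g deploy copies every dependency's files into the working directory, and a custom hook can then invoke CPack. The hook below is illustrative, not an official recipe; check the hook signatures of your installed Conan version:

    # e.g. ~/.conan/hooks/run_cpack.py (hypothetical file name)
    import subprocess

    def post_package(output, conanfile, conanfile_path, **kwargs):
        # After packaging, run CPack in the build folder (illustrative).
        output.info("Running CPack...")
        subprocess.check_call(["cpack"], cwd=conanfile.build_folder)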
Our friend SSE4 is writing a new blog post about deployment + Conan; I think it can help you a lot. You can read a preview here.
Regards!
I have written a piece of code that I would like to build into a binary and distribute to other folks, without having them go through the rigmarole of setting up the Haskell Platform and cabal. Is there a way to statically link the binary in a cabal build?
Just run cabal build; it links statically by default.
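To see what you actually shipped, inspect the result (the dist path is an assumption; it varies with the Cabal version and your package name):

    cabal build
    # Haskell dependencies are linked in statically; only system C libraries
    # (libc, libgmp, ...) should show up as dynamic:
    ldd dist/build/myprog/myprog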
I have a library which depends on some other libraries and, of course, on the Haskell runtime. It exports a C API.
I want to build it in a way that makes it fully self-contained, so the user wouldn't be bothered with installing Haskell, cabal and all the dependencies.
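For context, by "exports a C API" I mean the standard foreign export mechanism; a minimal sketch (the module and function names are made up):

    {-# LANGUAGE ForeignFunctionInterface #-}
    -- Expose one Haskell function through the C ABI.
    module Triple where

    import Foreign.C.Types (CInt)

    triple :: CInt -> CInt
    triple x = 3 * x

    foreign export ccall triple :: CInt -> CInt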
makes it fully self-contained, so the user wouldn't be bothered with installing Haskell, cabal and all the dependencies
Then you must distribute your library with all its dependencies: the Haskell compiler, runtime, C libraries, Cabal, and the dependent libraries. This is a non-trivial task; you're rolling your own Haskell Platform.
You could modify the HP source and generate installers; they would in effect be standalone installers for your library.
I'm looking to install Hubris for a Ruby-to-Haskell bridge.
Recent install instructions say that I need to enable shared library support in Cabal. Are there reasons why I might not want to do that?
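For context, enabling it means building libraries as shared objects in addition to the usual static ones. With cabal-install that is typically done like this (the package name and the config location are assumptions; flag spellings have varied across versions):

    cabal install --enable-shared hubris

    # or make it the default in ~/.cabal/config:
    #   shared: True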
One reason is that binaries built against shared Haskell libraries are affected by any future breakage of your locally installed Haskell packages. In other words, when you upgrade a library, you will have to either keep the old .so files around or rebuild the program. This is the main reason why Debian is not yet providing -dyn packages for any library besides the set of boot packages.
(The fact that cabal-install does not uninstall stuff helps here a bit, I guess. But nevertheless, I prefer not to worry that doing something with cabal-install or in .cabal might break existing programs.)