How to store the same library version compiled differently in Linux?

I have two different Qt projects, one of which must be compiled with Qt 4.8 and the other with Qt 5.4. Both need to use the QDeviceWatcher library compiled against the matching Qt version (i.e. the Qt 5 app should use QDeviceWatcher compiled for Qt 5, and the Qt 4.8 one needs QDeviceWatcher compiled with Qt 4.8).
The problem is that, under Linux, the .so library files are supposed to live in the same "global" folder, /usr/local/lib. Since I'm working on both projects, I'd have to have both sets of .so files in that folder, which is simply not possible given that they have the same name: the last addition would overwrite the previous one.
For now, each time I switch projects I replace the library files with the build I need, but that is obviously undesirable.
Is there any way around this problem?
The only two ways I've come across are to create a fake new version of the library (it is currently 2.0.0, so I could create version 2.0.1) and compile each version for a different Qt version (but of course that would be messy: it's a fake version!), or to place the .so files in a directory close to the project so they would be looked up there instead of in the global /usr/local/lib directory. But that, it seems to me, defeats the whole idea of having the library available globally to all current and future applications. And Google didn't help me with this.

Given that QDeviceWatcher is a third-party library, what I would do is install it outside of the /usr/local/lib folder, into the project directory instead, and update my .pro file to point to it directly.
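As a rough sketch of that .pro setup (the 3rdparty paths and the per-version lib folder names here are invented for illustration), one could keep one build of the library per Qt version inside the project tree and select between them with a qmake scope:

```qmake
# Hypothetical project-local layout:
#   3rdparty/qdevicewatcher/include/   shared headers
#   3rdparty/qdevicewatcher/lib-qt4/   libQDeviceWatcher.so built with Qt 4.8
#   3rdparty/qdevicewatcher/lib-qt5/   libQDeviceWatcher.so built with Qt 5.4
INCLUDEPATH += $$PWD/3rdparty/qdevicewatcher/include

greaterThan(QT_MAJOR_VERSION, 4) {
    QDW_LIBDIR = $$PWD/3rdparty/qdevicewatcher/lib-qt5
} else {
    QDW_LIBDIR = $$PWD/3rdparty/qdevicewatcher/lib-qt4
}

LIBS += -L$$QDW_LIBDIR -lQDeviceWatcher
# Embed an rpath so the matching .so is also found at run time,
# instead of any copy sitting in /usr/local/lib:
QMAKE_RPATHDIR += $$QDW_LIBDIR
```

Since each project resolves the library relative to its own tree, both Qt builds can coexist on the same machine without touching the global library directory.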

Related

Difference between popular CMake build system and genproj tool for OpenCASCADE

While exploring the platform setup for OpenCASCADE, I came to know about WOK commands, which aren't needed when using the CMake build system with OpenCASCADE.
However, there is another option, the genproj tool (for which I haven't yet found any EXE, only DLLs), to be used with MSVC++'s built-in compiler so that we don't need a gcc installation.
What's the difference between the two, and which one is better and easier?
Please also suggest how to download, install and set up genproj on Windows.
The OCCT project provides the following build systems:
CMake. This has been the main build system since OCCT 7.0.0.
It allows building OCCT for almost every supported target platform.
WOK. This was an in-house build system used by OCCT before the 7.0.0 release.
The tool handled classes defined in CDL (CAS.CADE Definition Language) files (WOK generated C++ header files from CDL) and supported building in a distributed environment (e.g. a local WOK setup builds only modified source files and reuses unmodified binary/object files from the local network). WOK support has been discontinued since OCCT 7.5.0, and it is unlikely to be able to build up-to-date OCCT sources (although the project structure remains compatible with WOK).
genproj. This is a Tcl script that generates projects for building OCCT with Visual Studio (2010+), Code::Blocks, Xcode and Qt Creator. The script was initially extracted from the WOK package (where it was implemented as the command wgenproj in its shell) and is now maintained independently of it.
qmake. The experimental adm/qmake solution can be opened directly from Qt Creator without a CMake plugin (the project files are generated recursively by qmake). Header-file generation (filling in the inc folder), however, should still be done using genproj (qmake's scripting capabilities were found to be too limited for this).
genproj doesn't require any DLLs or EXE files; it comes with OCCT itself and needs only a Tcl interpreter. On Windows it can be executed via the genconf.bat and genproj.bat batch scripts in the root of the OCCT source code folder. On first launch it will ask for the path to tclsh.exe.
While CMake is the main build tool for the OCCT project, genproj remains maintained and used by (some) developers, mostly out of personal habit and dislike of CMake. Differences of genproj from CMake that could be considered advantages in some cases:
Generated project files can be moved to another location/computer without having to regenerate them.
A simplified third-party dependency search tool, genconf, with a GUI based on Tcl/Tk.
Batch-script environment/configuration files (env.bat and custom.bat), although the CMake script in OCCT emulates similar files.
The generated Visual Studio solution contains Debug+Release and 32-bit/64-bit configurations.
The Draw Harness and regression tests can be started directly from Visual Studio (without building any INSTALL target).
No problems with CMakeCache.txt.
Limitations of genproj:
No CMake configuration files. Other CMake-based projects cannot reuse configuration files to simplify third-party setup.
Regeneration of project files has to be invoked explicitly.
Out-of-source builds are not supported (however, each configuration is put into a dedicated sub-folder).
No INSTALL target.
No PCH (precompiled header) generation.
It should be noted that several attempts have been made to keep compiler/linker flags consistent between CMake and genproj, but in practice they may differ.

Application deployment with 3rd-party dependencies for both Linux and Windows, using CMake and Conan

I'm working on a project which targets both Windows and Linux (and possibly macOS in the future). It consists of some applications with several shared libraries. It is written in modern C++ and modern CMake. It also uses 3rd-party libraries such as Qt, OpenCV, Boost, GraphicsMagick, samplerate and sndfile. Those dependencies are handled through the Conan package manager. I'm building both on Linux (Ubuntu 18.04, GCC 8.1) and Windows (via WSL, also Ubuntu 18.04, MinGW-w64 8.1). I'm using fairly recent versions of the 3rd-party libraries with custom build options (strictly speaking, different versions than those available from Ubuntu's APT, e.g. Qt v5.11.3, or a custom build of GraphicsMagick).
I'm using CPack. On Windows I'm building an NSIS installer, but on Linux I would like to use the DEB generator (possibly others if needed). All of my targets (apps and shared libs) have appropriate CMake INSTALL configurations, so they are copied correctly into the generated installers (component-based installation). The real problem comes with packaging the 3rd-party dependencies.
Problem
Strictly speaking, I have no idea how to do this well using CMake+CPack+Conan, on either Linux or Windows. I've read a lot of articles and posts, but I'm stuck. I would like something that automatically bundles into the installer all 3rd-party libraries used by the project, with the needed plugins and, most importantly, the needed system/compiler libraries (libgomp, libstdc++ and so on).
Possible solution
To my surprise, on Windows this task is fairly easy, because every DLL used by the app (my libs, 3rd-party libs and system/compiler libs) needs to be located next to the executable. I involve Conan in this by importing all used DLLs into the bin directory. In the end, in the most naive packaging approach, I can just copy the bin directory into the installer and it should work. But I'm not sure whether this approach is OK.
On Linux, things are more complicated. First, there is already a package manager. Unfortunately, the libraries/compilers available there are too old for me (e.g. APT offers only Qt 5.9.6) and are built with different compile options. So the only way for me is to ship the libraries with my software (as on Windows). There are also issues with how ld searches for dynamic libraries, RPATH handling and so on. At the moment, the only solution I see is to write a kind of 'launcher' for my app which sets LD_LIBRARY_PATH before the program starts. After that, we can just copy the bin or lib directory into the DEB installer and this should work. But still, I don't know if this is the correct approach.
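Such a launcher can be very small. Below is a self-contained sketch of the idea (the bundle layout, the myapp name and the stand-in binary are all made up for demonstration): the script resolves its own location, prepends the bundled lib/ directory to LD_LIBRARY_PATH, and execs the real binary.

```shell
#!/bin/sh
set -e
ROOT=$(mktemp -d)   # stands in for the installed bundle root
mkdir -p "$ROOT/bin" "$ROOT/lib"

# Stand-in "application" that just reports the library path it sees.
cat > "$ROOT/bin/myapp" <<'EOF'
#!/bin/sh
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
EOF
chmod +x "$ROOT/bin/myapp"

# The launcher itself: prepend the bundled lib/ dir, then exec the binary.
cat > "$ROOT/run.sh" <<'EOF'
#!/bin/sh
HERE=$(cd "$(dirname "$0")" && pwd)
export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/bin/myapp" "$@"
EOF
chmod +x "$ROOT/run.sh"

"$ROOT/run.sh"   # the app now sees <bundle>/lib first on its search path
```

An alternative worth considering is baking an $ORIGIN-relative RPATH into the binaries at link time, which avoids the wrapper script entirely.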
Other solutions
I've also looked into other solutions. One of them was BundleUtilities from CMake. It doesn't work for me. It has a lot of problems recognizing whether a library is a system or a local one, especially in WSL, where it got stuck processing dependencies on USER32.dll and KERNEL32.dll. BundleUtilities on Windows worked for me only with MSYS, but under MSYS I failed to compile some 3rd-party libraries (GraphicsMagick via Conan), which is why I'm using WSL.
Summary
I'm looking for a good and verified method of packaging C++ projects with multiple apps, libs and shipped 3rd-party libs, for both Windows and Linux. How do you do things like this? Are you just copying the bin and/or lib dirs into the installers? How (in terms of CMake/CPack code) do you do that? INSTALL(DIRECTORY ...), or similar? I'm not sure, but I think this problem should already be solved in the industry. ;)
Thanks for all suggestions.
First, Conan is a package manager for development, not for distribution; that's why you didn't find an easy way to solve your problem. Second, most discussions take place in the Conan issue tracker, including bugs and questions. There you will find a big community plus the Conan devs, who are very helpful.
with needed system/compiler libraries
This is not part of Conan. Why don't you use static linkage for system libraries?
Regarding CPack, we have an open discussion about using it with Conan: https://github.com/conan-io/conan/issues/5655
Please, join us.
I see few options for your case:
In the package method, run self.copy over all dependencies from self.cpp_deps, which includes all libraries, so that you can then run CPack
Use the Conan deploy generator to deploy all artifacts; using a hook, you can run CPack or any other installer tool
Our friend SSE4 is writing a new blog post about deployment with Conan; I think it can help you a lot. You can read a preview here.
Regards!
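To make the copy step concrete (this is only a sketch, using the Conan 1.x API and hypothetical package references), an imports()-style conanfile gathers every dependency's shared libraries next to the project's own binaries, after which CPack can simply bundle those directories:

```python
from conans import ConanFile

class MyAppConan(ConanFile):
    settings = "os", "compiler", "build_type", "arch"
    requires = "qt/5.11.3", "opencv/4.1.1"  # hypothetical references

    def imports(self):
        # Copy each dependency's runtime artifacts into the build tree,
        # so CMake's INSTALL(DIRECTORY ...) / CPack can pick them up.
        self.copy("*.dll", dst="bin", src="bin")   # Windows runtime DLLs
        self.copy("*.so*", dst="lib", src="lib")   # Linux shared objects
```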

How to ship an openfl desktop (windows) application

I created a haxe project using openfl. I can build and run the project on my own desktop using lime test windows.
Now how do I actually ship this project to other users?
I tried simply zipping the binary output created by running the command above. When I then unpack this zip on a different computer and start the executable file it will complain that I'm missing certain .dll files (more specifically the libstdc++-6.dll file).
Although this is not a direct answer to your question, a solution to this issue would be to compile through other software/tools, such as FlashDevelop.
For a few versions now, FlashDevelop has included an App manager feature that allows you to easily install the latest versions of Haxe, Lime & OpenFL (in an all-in-one package) and to compile for all the Haxe/Lime/OpenFL targets seamlessly by just switching a value in a drop-down menu.
This allowed me to compile native C++ or Neko versions of my projects without any problems, embedding all the necessary files, which could then be zipped and sent to other computers.

Linux accommodates multiple versions of a shared library but what about includes?

I'm trying to understand how shared library versions are managed under Linux and how these interact with different versions of include files when configuring and compiling a program.
I understand that a system can have multiple versions of a shared library (.so files), differing in the first version number following libxxx.so in the filename, and that different programs may have been linked against different versions of the same library. A new version of a library (where the .so.# has changed) is generally incompatible with the previous version. (Version numbers after the first indicate minor changes that do not affect compatibility.)
If I am compiling (or recompiling) an older program that was linked against an older version of a library, and I have both the older and newer libraries on my system, there seems to be no mechanism for managing multiple versions of the include files associated with each library version. So even though I have the older version of the library available, without the older version of the include files to compile against, I really can't recompile that program. Is that true?
If so, support for multiple library versions seems to be of questionable value. The idea must be that the only users of old library versions are programs that were compiled when the old version was current, and that no program should ever be recompiled unless all versions of all the libraries it links with are the most recent versions installed on the system. As soon as a new version of a library is installed, all programs using the older version could no longer be compiled (unless they are updated to use the newer library). Right?
Or do people commonly go to the trouble of keeping a separate subdirectory of include files for each version of each installed library, so programs can be recompiled with the appropriate include-file and library versions?
Include-files are handled differently from shared libraries:
With shared libraries, there is one name for development packages to use when linking, which is a symbolic link to a file with a specific soname (name plus version). After linking, the program holds a reference to that file, which is used whenever the program runs.
Include files are distinguished by the -I include-path option. When there are multiple useful versions of a library, some developers package the versions using different directory names to hold the related header files. That way, changing just the -I option when compiling lets build scripts work with a specific version of the headers. But unlike shared libraries, header files are used only when building the program that uses a library.

Export libraries which are used in compiling & running, MSVC++ 2010

I am developing a C++ program which uses 6 different libraries (Boost, OpenCV, Protobuf etc.). I've compiled and installed all required libraries on my development PC. I want the program to work standalone on another computer. I could copy all the shared libraries into a folder and put the executable next to them, but then my program's size would be enormous. Is it possible to have MSVC++ export only the shared libraries (and maybe the include files too) which are actually used in the compilation?
Edit: Let me make clear the question.
I have libraries & header files scattered all around my PC (like C:\boost\lib, D:\Workspace\opencv\lib etc.)
I don't use most of the libraries in these folders in my application.
I want to run my application at another computer.
I don't want to install all the libraries to the new computer.
I want MSVC to export only the required (used) shared libraries into a folder that I specify, so I can copy just one folder containing only the required libraries.
Is it possible?
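As far as I know, MSVC has no built-in "export used DLLs" step, but the toolchain can at least enumerate what a built executable depends on, which makes the copy step scriptable. For example (MyApp.exe is a placeholder name; run from a Visual Studio command prompt):

```
rem List the DLLs the linked executable actually loads at startup:
dumpbin /DEPENDENTS MyApp.exe
```

The output can then drive a script that copies just those DLLs (minus the system ones) into a single deployment folder.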
