Preferred way to build multiple projects with Jenkins, CMake and pkg-config? - linux

I am developing two libraries A and B, with B depending on A, both managed in their own Git repositories. The libraries are built with CMake and installed in standard UNIX directories. During installation a .pc file is also installed for use by pkg-config. Library B uses pkg-config to find library A, so either library A must be installed system-wide with make install, or PKG_CONFIG_PATH must be set to the appropriate directory.
Now, I use Jenkins to build library A on a remote machine. Unfortunately, library B cannot be built because the dependency is not met (pkg-config cannot find library A). Setting the paths in a pre-build step does not work because each command is run in its own shell.
The questions are
Can I somehow make install library A? Or,
can I somehow point CMake to /var/lib/jenkins/jobs/libA/install_dir/lib?
Is there a better way to build projects with inter-dependent libraries?

To answer your questions in order:
To make install library A - You can configure the Jenkins job that builds library A to archive the library as a build artefact. Then the job to build library B can download the artefact from Jenkins at the start of the run – e.g. http:///job/libA/lastSuccessfulBuild/artifact/
Once the library B job has collected library A it can then be installed and used.
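For example (the host name and the artifact file name here are placeholders, not part of the original setup), the library B job could start with a shell step along these lines:
# Fetch the last successful library A artifact from Jenkins and unpack it
mkdir -p "$WORKSPACE/libA"
wget -O libA.tar.gz "http://JENKINS_HOST/job/libA/lastSuccessfulBuild/artifact/libA.tar.gz"
tar xzf libA.tar.gz -C "$WORKSPACE/libA"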
Configuring CMake – I don't know enough about CMake so I'm afraid I can't answer that.
Is there a better way – Possibly using Rake; we use it to control a build chain with lots of dependencies. Although I'm not sure how well it would work if library A has to be built on a remote machine. Things might be simpler to manage if both libraries are built on the same machine.

Using artifacts, as suggested by user1013341, is one of the steps needed to solve this problem. But to get it working with pkg-config we have to do a little bit more:
I set up library A's CMakeLists.txt to produce a tarball with make package_source.
After a successful build of library A, Jenkins creates this tarball and stores it as an artifact.
Library B's job uses the Copy Artifact Plugin to get the tarball and untars it. Inside the tarball is the built project, including the .pc file pointing to the install location of library A.
In the next build step, I use the EnvInject Plugin to set the PKG_CONFIG_PATH and the LD_LIBRARY_PATH to the untarred library A.
Last but not least, the normal CMake build process can be started and the correct paths are picked up according to the environment variables.
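In shell terms, library B's job ends up doing roughly the following (the tarball name and paths are placeholders; in the actual job the Copy Artifact Plugin supplies the tarball and the EnvInject Plugin sets the two variables):
# Unpack the library A artifact fetched by the Copy Artifact Plugin
tar xzf libA.tar.gz -C "$WORKSPACE/libA"
# Point pkg-config and the dynamic linker at the untarred library A
export PKG_CONFIG_PATH="$WORKSPACE/libA/lib/pkgconfig:$PKG_CONFIG_PATH"
export LD_LIBRARY_PATH="$WORKSPACE/libA/lib:$LD_LIBRARY_PATH"
# Ordinary CMake build of library B; pkg-config now resolves library A
mkdir -p build && cd build
cmake .. && make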

Related

Create a ready-to-use install package on Linux with CMake or gcc, including shared dependencies

I have completed a small project that uses several libraries. The CMakeLists.txt looks like this:
cmake_minimum_required(VERSION 3.5)
project(tf_ct_log C)
set(CMAKE_C_STANDARD 99)
include_directories(include /usr/local/include/hiredis /usr/include/openssl)
link_directories(/usr/lib/x86_64-linux-gnu)
set(HDR include/ct_logger.h)
add_executable(tf_ct_logger src/main.c src/ct_logger.c ${HDR})
find_package(OpenSSL REQUIRED)
find_package(Threads REQUIRED)
find_library(PostgreSQL REQUIRED)
find_library(jansson REQUIRED)
target_link_libraries(tf_ct_logger OpenSSL::SSL jansson pthread pq)
I would like to be able to build a package that can be installed on another machine, without downloading any dependencies. With ldd, I've got all the dependencies of the application and copied those files (libxyz.so...) into a subdirectory deps in my project. How can I create that package using those dependencies so that the end user will just use the object files of my project along with the dependency libraries to create the executable?
It gets really hairy real quick when you need to create a native package for multiple flavors of installers (Debian, RH, Arch, etc.), especially if customization is involved.
If you just need a clean, reproducible way to get it onto a box and run it, I would strongly suggest looking towards packaging it as a Docker container.
You start from some lightweight Linux distro container (Alpine is the latest trend), derive it into one with the C and C++ runtimes and anything else you depend on, and call this "my prod container". From that you derive one with a C++ compiler and debugger installed and call it "my dev container".
We actually wrote a little memo a while back, while making our open source hobby usable for others.
You will probably still need to clean up your CMake file to an extent that the "install" target works (mine is here).
I may not have expressed myself correctly. I just wanted to compile and build executables from my project, then find a way to copy them to another machine without having to install all the dependencies before running the app.
The solution I found (with the help of a more experienced developer) was as follows:
1- Get all dependencies using ldd
2- Copy the dependencies into a directory dependencies
3- On the target environment, copy the contents of dependencies into /usr/local/lib/myapp/
4- On the target environment, go to /etc/ld.so.conf.d/
5- create the file myapp.conf with one line in it: /usr/local/lib/myapp
6- run ldconfig
Then, the executable I created on my development machine runs smoothly there!
Of course, all the steps I described above must be scripted for automation.
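A minimal sketch of such a script, assuming the binary is tf_ct_logger and the target directory is /usr/local/lib/myapp as above (the packaging half runs on the build machine, the install half as root on the target machine):
#!/bin/sh
# Packaging half: collect every shared library the executable resolves
mkdir -p dependencies
ldd ./tf_ct_logger | awk '/=> \// {print $3}' | while read -r lib; do
    cp -v "$lib" dependencies/
done

# Install half: register the bundled libraries with the dynamic linker
mkdir -p /usr/local/lib/myapp
cp dependencies/* /usr/local/lib/myapp/
echo /usr/local/lib/myapp > /etc/ld.so.conf.d/myapp.conf
ldconfig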

Workflow for compiling and installing software with Puppet

I need to manually compile and install FFMPEG as one of my dependencies using Puppet manifests. I want to do this myself to customize the configuration of FFMPEG to the needs of my project.
I'm not sure as to how to structure the entire process with classes. The logic should go something like this:
If /usr/local/bin/ffmpeg doesn't exist, compile:
Install build dependencies from apt-get.
Create a directory for the library sources /tmp/ffmpeg
Download and compile the Yasm assembler
Download.
Extract.
Configure
Make
Make Install
Download and compile x264
Clone the source.
Configure.
Make
Make Install
Download and compile fdk-aac:
...
....
I can easily branch all of these out into their own modules and declare them as dependencies of FFMPEG, that's not the problem.
My main problem is understanding how to do the whole download/extract/compile process for each module only if it's not already present on the system.
How do I structure my classes to only act if the software is not already installed?
Regardless of how you go about it, you need a way to check whether your custom installation has been installed.
Common methods include
checking a file and running a command only if it is not present: http://docs.puppetlabs.com/references/latest/type.html#exec-attribute-creates
running a command only if another command returns 0: http://docs.puppetlabs.com/references/latest/type.html#exec-attribute-onlyif (alternatively, the unless attribute)
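In plain shell terms, the guard those attributes express boils down to something like this (using the paths from the question; the build body is elided):
# Run the build only when the binary is not already present,
# which is what creates => '/usr/local/bin/ffmpeg' (or an unless/onlyif test) encodes
if [ ! -x /usr/local/bin/ffmpeg ]; then
    mkdir -p /tmp/ffmpeg
    cd /tmp/ffmpeg
    # ... download, extract, ./configure, make, make install for each component ...
fi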

How to build src from a CygPort?

I have a question about the structure of the source code from a cygport package.
Here is the contents of a Cygports source file:
the actual source bundle for the project (tar.gz, tar.bz2, etc.)
any number of *.patch files.
a .cygport file
I am trying to build gedit-3.4.2 from cygports repository.
How does the .cygport file help me run the proper options in the ./configure ?
For instance, in gedit if I don't specify --disable-spell it won't proceed due to an error. How do I get the list of ./configure options that were used to build the project when the cygport was built?
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
Here is the contents of gedit-3.4.2-1.cygport:
inherit python gnome2
DESCRIPTION="GNOME text editor"
PATCH_URI="3.4.2-cygwin.patch"
DEPEND="gnome-common gtk-doc
girepository(Gtk-3.0)
pkgconfig(enchant)
pkgconfig(gtksourceview-3.0)
pkgconfig(libpeas-gtk-1.0)"
PKG_NAMES="${PN} ${PN}-devel"
PKG_HINTS="setup devel"
gedit_CONTENTS="--exclude=gtk-doc --exclude=libgedit* etc/ usr/bin/ usr/lib/gedit/ ${PYTHON_SITELIB#/} usr/share/"
gedit_devel_CONTENTS="usr/include/ usr/lib/gedit/libgedit* usr/lib/pkgconfig/ usr/share/gtk-doc/"
DIFF_EXCLUDES="*.desktop.in *.schemas.in *-marshal.h"
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python"
KEEP_LA_FILES="none"
EDIT: Someone from the Cygwin Ports mailing list said:
"The configure options are
--libexecdir=/usr/lib --enable-python
Which is from CYGCONF_ARGS."
Here is the contents of a Cygports source file:
You'd do better to think of it as a Cygwin package source file.
cygport is simply a tool for automating the creation of Cygwin binary and source packages. It is the primary tool available, but unlike with some other packaging systems, there's really nothing forcing you to use it. It is quite possible to build a Cygwin package entirely by hand, since it is really nothing more than a tarball that Cygwin's setup.exe can blindly unpack into the Cygwin root directory (typically c:\cygwin) with the expectation that this will put the package's files in sensible locations.
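Installing such a package by hand is essentially just unpacking it over the Cygwin root, e.g. (the file name here is only illustrative):
tar -C / -xjf gedit-3.4.2-1.tar.bz2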
Before cygport existed, people did build their own ad hoc packaging systems. Many Cygwin package maintainers still use these tools they created. (Yours truly included; two of my three packages use cygport, but the third still uses a custom build system.)
Ultimately, you want to read the cygport manual, in /usr/share/doc/cygport/manual.html.
(Yes, I know, "RTFM" answers are frowned on here. But, as one who currently maintains two cygport based packages in the official Cygwin package repository, please believe me when I tell you that the manual is still the single best resource available on this topic.)
How does the .cygport file help me run the proper options in the ./configure ?
As you found out through other resources, you'd first need to edit the CYGCONF_ARGS value in the .cygport file.
The simplest possible step after that is cygport gedit-3.4.2-1.cygport all. That attempts to rebuild all the binary packages in a single step. It also builds a new source package containing updated .cygport and patch files.
If something breaks in the all build process, it is usually faster to switch to using the sub-commands contained by all instead of completely restarting the process. The all step just runs prep, compile, install, package, and finish for you, in that order. For instance, if all fails during the compilation step, there's probably no need to repeat the prep step.
(It is exceptionally uncommon for cygport or a sane build system to wreck the build tree, forcing you to re-run prep. Far more commonly, you end up needing to re-do prep when you manually wreck the build tree while trying to get a new package to build for the first time and need to start over.)
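For instance, if the compile step failed, you could fix the problem and then resume with just the later steps:
# prep has already unpacked and patched the source, so skip it
cygport gedit-3.4.2-1.cygport compile
cygport gedit-3.4.2-1.cygport install
cygport gedit-3.4.2-1.cygport package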
For instance, in gedit if I don't specify --disable-spell it won't proceed due to an error.
You can probably fix that by installing the libaspell-devel package from the official Cygwin package repository with setup.exe.
Personally, I wouldn't disable any feature unless it meant installing unofficial packages, such as those from the Cygwin Ports project.[*] It is nice to have Cygwin Ports repository, but because it contains so many packages, installing one can end up creating an "install the world" situation: package A depends on packages B, C and D, and C depends on E, F, G, H, and G depends on I, J, K, and... Dependency hierarchies within the Cygwin package repo tend to be flatter and narrower than those in the Cygports repo.
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
You have guessed that you just add --prefix=/my/private/program/tree to CYGCONF_ARGS, I trust.
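In other words, the CYGCONF_ARGS line in the .cygport file shown above would become:
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python --prefix=/my/private/program/tree"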
[*] If you are feeling confused about "Cygwin Ports" and cygport, the naming similarity is no coincidence. cygport is a tool created by Yaakov Selkowitz for himself when creating the Cygwin Ports package repository. Later, it became popular enough among other Cygwin package maintainers that it pushed out most of the competing build systems.

Packaging multiple rpms in one file

Is it possible to package multiple rpms into one file? I have got two bundles, one of which requires that the other be installed. I would like to create a single installable out of them in such a way that this installer will first invoke pkg 1 and then install pkg 2. Is this possible? What about deb packages? Sorry if it is a basic question. I have not worked with installers on Linux before. I have created Windows installers; there you can create two merge modules (.msm) and package them into a standalone installer (.msi) and specify the order of execution. I am looking for similar functionality on Linux.
EDIT: I think the question was not clear enough. Let me try to rephrase it. I have a bunch of runtime libraries which is currently shipped as a standalone installer. Another team develops products which use these libraries at runtime. I now want to provide the libraries to the product team in some form (sub-package) which they can include with their installer and configure their installer to install my sub-package first. Both packages should be available locally to the end user. They are not available on any repository and cannot be pulled down from the net at install time.
Try packaging them into a self-extracting bash script (a sketch follows after the reference below). You won't have to modify or aggregate the libraries and rpms together. This should yield an executable file that can be included in another installation process. The last step of the extraction should be to call "rpm -i" with the rpm files as arguments.
Reference on building a self extracting bash script:
http://www.linuxjournal.com/node/1005818
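A minimal sketch of such a self-extracting script, assuming the two rpms are named libs.rpm and product.rpm and the tar payload is appended after the __ARCHIVE__ marker (all of these names are placeholders):
#!/bin/bash
# Stub: everything after the __ARCHIVE__ marker is an appended tar archive,
# produced e.g. with: cat stub.sh payload.tar > installer.sh
set -e
ARCHIVE_LINE=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit 0}' "$0")
TMPDIR=$(mktemp -d)
tail -n +"$ARCHIVE_LINE" "$0" | tar -xf - -C "$TMPDIR"
# Install the runtime libraries first, then the package that needs them
rpm -i "$TMPDIR/libs.rpm"
rpm -i "$TMPDIR/product.rpm"
rm -rf "$TMPDIR"
exit 0
__ARCHIVE__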
First: you don't have to repackage anything, especially not if your application uses some external libraries. You just have to state in your recipe that your RPM (or DEB) depends on the other one. Both apt-get (for deb) and yum (for rpm) will check these dependencies and install them if needed.
See :
http://rpm5.org/docs/api/dependencies.html (rpm) and http://www.linuxfordevices.com/c/a/Linux-For-Devices-Articles/How-to-make-deb-packages/ (deb)
(These were just the first ones I found, you can find better resources out there :p).

Building a Win32 DLL from a Linux library source

I'm trying to build a Win32 DLL from an audio-DSP related Linux library (http://breakfastquay.com/rubberband/). There are makefiles and config scripts for Linux, but no help for Windows. The author provides a Win32 binary of a sample app using the library, and I see a number of "#ifdef MSVC" and "#ifdef WIN32" scattered around, so I don't think I'm starting completely from scratch but I'm stuck nevertheless.
As my programming knowledge in either platform is rather limited, I'd appreciate any help.
First of all, what is the right way to get started here? Visual Studio? Cygwin? Initially I started off creating a Win32 DLL project in Visual Studio, adding the source files, thinking about adding a .def file, etc, but at some point I felt like this was going nowhere.
As for Cygwin, this was the first time using it, and I don't even know if this is the sort of thing that Cygwin is designed for. Is it?
On Cygwin, I ran ./configure and got stuck at something like this:
"checking for SRC... configure: error: Package requirements (samplerate) were not met: No package 'samplerate' found"
After looking through the log, it appears that pkg-config is looking for samplerate.pc. How do I handle packages in Windows? libsamplerate is just an open source library, and I have source and a DLL for this. But I'm not sure how to use them to satisfy the dependency requirements for librubberband (which is what I'm trying to build)
I'm completely lost at this point and if anyone can give me a nudge in the right direction... and, is there an easier way to do this?
Many thanks in advance.
If you're still stuck on this I can throw a little light.
You may have to build everything from sources (or have the libraries installed in your environment). You're using Cygwin; I would recommend MinGW and MSYS too, but sometimes it's just not possible to use that combination to build the program or library.
So if you are using Cygwin, first ensure that you have a proper environment installed, that is, that you have the correct development headers installed.
Then download libsndfile. Extract the sources to a directory and from the Cygwin bash shell navigate to that directory. There perform:
./configure
make
make install prefix=/cygdrive/c/cygwin
Notice that I use a prefix; that prefix should point to the directory Cygwin is installed in, so that the libraries are installed correctly (the same goes for MinGW and MSYS, where the prefix should point to the MinGW installation directory). Maybe using the usr directory in the prefix works too; I've never tried it.
Now download FFTW, as it will be needed for libsamplerate and rubberband. Same procedure as with libsndfile: extract, configure, make & make install using the prefix. Now copy the header files of FFTW (in the example they'd be in /cygdrive/c/cygwin/include) to the include directory in the usr directory (in the example /cygdrive/c/cygwin/usr/include).
Next SRC (libsamplerate), same procedure.
Then the Vamp plugin SDK. In order to compile it you may need to edit the file src\vamp-hostsdk\PluginLoader.cpp, deleting RTLD_LOCAL from a dlopen() call (it's safe; it's already the default behaviour).
Also, you may need to install it by hand (in my experience it didn't like the prefix). Or set the environment variable PKG_CONFIG_PATH to point to the pkgconfig directories, e.g.:
export PKG_CONFIG_PATH=/cygdrive/c/cygwin/lib/pkgconfig:/usr/local/lib/pkgconfig
Now, create a file called ladspa.h in the include directory with the contents of the LADSPA header.
Finally, configure and build rubberband, it should find everything it needs.
To build in MSYS using MinGW follow the same procedure, using the according prefix. Using Visual Studio is another alternative, but you may need to use some of the pre-built libraries (for example for libsndfile) as building Linux libraries natively in Windows may be complicated or even impossible (without hacking the source code) in VS.
Anyway, the author of rubberband provides binaries; I think you should consider using them instead of going through all of this.
Porting from Linux to Win32 is mostly a tricky thing.
For each of your dependencies, download the source and:
./configure
make
sudo make install
Also, I recommend you use MinGW + MSYS in place of Cygwin (as the latter produces executables that depend on its libraries). However, in your situation, use the VS approach -- it will save you a lot of time.
