Continuous integration with in-house libraries - shared-libraries

We have a project, say coresystem, which uses a number of in-house libraries, all at different versions.
The build configuration for coresystem is set up to reference libraries with specific version numbers; for example, coresystem 2.3.4 uses abc-version-1.2.3 and def-version-3.4.5.
These libraries are often changed at the same time as coresystem, and not necessarily the same set of libraries changes with each version of coresystem.
How do we handle continuous delivery in this case? Currently we are constantly having to change the build config of coresystem.

Using variables as parameters that you pass in to the build system, instead of hard-coding the versions in the config, will give you what you want to achieve here. Depending on the build system you are using, there are different ways to pass in parameters like this.
For example, GitLab CI/CD supports custom environment variables:
https://gitlab.com/help/ci/variables/README#custom-environment-variables
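For illustration, a minimal .gitlab-ci.yml sketch of that idea (the variable names, default versions, and the fetch-dependency.sh helper are assumptions, not part of the original setup):

    # .gitlab-ci.yml (sketch)
    variables:
      ABC_VERSION: "1.2.3"   # defaults; can be overridden per pipeline run
      DEF_VERSION: "3.4.5"

    build:
      stage: build
      script:
        # Fetch whichever library versions this pipeline was started with.
        - ./fetch-dependency.sh abc "$ABC_VERSION"
        - ./fetch-dependency.sh def "$DEF_VERSION"
        - make

Because the versions are ordinary CI/CD variables, they can be overridden when a pipeline is triggered (via the UI, API, or a trigger) without touching coresystem's build config.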

Related

Is it possible to use environment variables for rpath?

I have an Unreal Engine project with some plugins that are symlinked and others that are copied, so I have to load dynamic libraries from several different places. Since this is supposed to work on different dev machines, the project itself and the Unreal Engine can be in different locations. It would therefore be nice to use one environment variable for the project and one for the Unreal Engine in the rpaths. Is this possible?
No.
Write a wrapper script that uses LD_LIBRARY_PATH and also LD_PRELOAD to load the libraries that you need. Take inspiration from Steam.
See man ld.so; it has a nice list of what happens and which variables are used.
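For illustration, a minimal wrapper-script sketch along those lines (the directory layout, the MY_PROJECT_DIR/UNREAL_ENGINE_DIR variables, and the binary name are assumptions):

    #!/bin/sh
    # run-myproject.sh (sketch): resolve locations per machine via environment variables.
    PROJECT_DIR="${MY_PROJECT_DIR:-$HOME/MyProject}"
    UE_DIR="${UNREAL_ENGINE_DIR:-$HOME/UnrealEngine}"

    # Let the dynamic linker search both plugin locations first.
    export LD_LIBRARY_PATH="$PROJECT_DIR/Plugins:$UE_DIR/Engine/Binaries/Linux:$LD_LIBRARY_PATH"

    # If specific libraries must be force-loaded, LD_PRELOAD can be set as well:
    # export LD_PRELOAD="$UE_DIR/Engine/Binaries/Linux/libexample.so"

    exec "$PROJECT_DIR/Binaries/Linux/MyProjectApp" "$@"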

GitVersion - selective versioning of multiple assemblies of the same project

I'm on a .NET C# project composed of a solution with several class library projects.
The source control is managed by git, using gitflow as the branching model.
We have decided to implement semantic versioning (http://semver.org/) for the project in order to follow a standard way of communicating our releases.
For that we are using GitVersionTask (via NuGet), which works pretty well with gitflow.
Every time we tag a release and perform a build from the master branch, the versions of all assemblies are updated and a new release is out for delivery.
Only one of the assemblies has a public API; all the others are for internal consumption. I would like to know if this is the correct way to manage the versions of multiple assemblies of the same project. I mean, isn't it wrong to change the version of every assembly when only a couple (or even just one) were changed? To make things more complicated, there is a strong possibility that some of the "internal" assemblies will be used by other projects, so I believe it is not very wise to increment the major version of an assembly that didn't change just because another assembly of the same project is introducing breaking changes. Should each assembly project be managed in its own repository?
Thanks in advance.
I know this is a bit of an old question, still:
I want to share a workaround that seems to be working:
GitVersion uses $(Build.SourcesDirectory) to find out where the sources are located (src).
We can change this using logging commands*.
The workaround is to set Build.SourcesDirectory before the GitVersion task.
Then GitVersion uses the GitVersion.yml from the project folder (Build.SourcesDirectory) and voilà, it works.
After that you may or may not want to roll the change back, depending on your needs. For me it is nice to scope down to just the one NuGet package out of the collection of NuGet packages in our nugetPackages monorepo.
see GitVersion issue and comment
*Example PowerShell command (a standard PowerShell task, set to an inline script):
Write-Host "##vso[task.setvariable variable=Build_SourcesDirectory;]$(Build.SourcesDirectory)\$(NugetProjectName)"
There is certainly nothing in GitVersion that would help with having separate projects within the same repository. The guidance that we would offer here is that you should use different repositories for the different parts of your application. That way they can be versioned/updated at their own cadence.

Debian Packaging: Different configuration per subpackage

I have a CMake-based project with a static library (by default) for which I need to provide deb packages. I want to do it nicely and provide a shared and a static library in different packages.
So: how can I pass different configuration options from debian/rules to the underlying CMake for the lib$packagename and the lib$packagename-dev packages? Say, in this example, switch CMake to build a shared library via CMAKE_FLAGS += -DBUILD_SHARED_LIBS=ON?
I can't find many examples for the more recent debhelper compat level (which is 9 in my case). Is it recommended to use an earlier version for this specific requirement?
Thanks and Greetings
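For reference, a minimal debian/rules sketch of the kind of override being asked about, assuming debhelper's dh sequencer with the CMake buildsystem (producing both a shared and a static variant would additionally need a second configure/build pass, e.g. via --builddirectory):

    #!/usr/bin/make -f
    # debian/rules (sketch); recipe lines must be indented with tabs.
    %:
    	dh $@ --buildsystem=cmake

    # Pass extra flags through to the underlying cmake invocation.
    override_dh_auto_configure:
    	dh_auto_configure -- -DBUILD_SHARED_LIBS=ON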

Setting a "hard-coded" flag in sources during build process

I am developing a (Groovy) application that I build via Gradle (on a Continuous Integration server). That application should be compiled into two versions: one development build (including some features I only want to enable for myself), and one public build (which would not include or just disable those "development features").
One solution to this would be to have something like a global flag directly in the main class of the application, something like static final boolean PUBLIC_RELEASE. Then within my code I could check for that flag and enable or disable a certain feature.
Now in my Gradle build script I could check for an environment variable (set by the Continuous Integration server). If that variable is set, then I could set (i.e. change) the current value of the flag to either true or false before the sources are compiled.
I am sure that approach would work. However, it does not feel right to modify the sources themselves during the build process. On the other hand, I would assume this is a fairly standard task for many software projects.
Is there any "best practice" to deal with this requirement?
I can work out three ways of handling the scenario, ordered in the way I would do it:
1. Create a dedicated properties file that is filtered during the build and added to the final jar. Application behavior is determined by this file at runtime. Basically this is how such a scenario is usually handled, but the file can be modified directly in the jar by the user (see the Gradle sketch after this list).
2. Source code filtering, hint: ReplaceTokens. This seems the best way of securing the application, since the behavior is compiled directly into the code, but it is also problematic when it comes to filtering.
3. Configure the behavior of the application by passing system properties (-D) at runtime. A lot of such properties might have to be passed, so this can be problematic for the end user, and the configuration of the application is explicitly exposed.
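A minimal Gradle (Groovy) sketch of the first option, filtering a properties file during the build (the file name, token name, and PUBLIC_RELEASE environment variable are assumptions):

    // build.gradle (sketch)
    // Assumes src/main/resources/build-info.properties contains the line:
    //   publicRelease=@publicRelease@
    import org.apache.tools.ant.filters.ReplaceTokens

    processResources {
        // The CI server sets PUBLIC_RELEASE; local builds fall back to 'false'.
        filter(ReplaceTokens, tokens: [
            publicRelease: System.getenv('PUBLIC_RELEASE') ?: 'false'
        ])
    }

At runtime the application reads build-info.properties from the classpath to decide whether the development-only features are enabled.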

RPM - Install time parameters

I have packaged my application into an RPM package, say, myapp.rpm. While installing this application, I would like to receive some input from the user (an example of such an input could be the environment where the app is being installed: "dev", "qa", "uat", "prod"). Based on the input, the application will install the appropriate files. Is there a way to pass parameters while installing the application?
P.S.: A possible solution could be to create an RPM package for each environment. However, in our scenario, this is not a viable option since we have around 20 environments and we do not wish to have 20 different packages for the same application.
In general, RPM packages should not require user interaction. Time and time again, the RPM folks have stated that it is an explicit design goal of RPM to not have interactive installs. For packages that need some sort of input before first use, you typically ask for this information on first use, or you put it all in config files with macros or something and tell your users that they will have to configure the application before it is usable.
Even passing a parameter of some sort counts as end-user interaction. I think what you want is to have your pre or install scripts auto-detect the environment somehow, maybe by having a file somewhere that they can examine. I'll also point out that from an RPM user's perspective, having a package named *-qa.rpm is a lot more intuitive than passing some random parameter.
For your exact problem, if you are installing different content, you should create different packages. If you try to do things differently, you're going to end up fighting the RPM system more and more.
It isn't hard to create a build system that can spit out 20+ packages that are all mostly similar. I've done it with a template-ish spec file and some scripts run by make that create the various spec files and build the RPMs. Without knowing the specifics, it sounds like you might even have a core package that all 20+ environment packages depend on, with the environment-specific packages installing whatever is specific to their target environment.
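As an illustration of that template approach, a small shell sketch (the file names, the @ENV@ placeholder, and the environment list are hypothetical):

    #!/bin/sh
    # Generate one spec file per environment from a template, then build it.
    for env in dev qa uat prod; do
        sed "s/@ENV@/${env}/g" myapp.spec.in > "myapp-${env}.spec"
        rpmbuild -bb "myapp-${env}.spec"
    done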
You could use the relocate option, e.g.
rpm -i --relocate /env=/uat somepkg.rpm
and have your script look up the variable data from a file located in the "env" directory.
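For example, a minimal scriptlet sketch for that approach (it assumes the package declares Prefix: /env, so rpm exports the relocated path to the scriptlets as $RPM_INSTALL_PREFIX, and that the package ships an app.conf file there):

    %post
    # Read environment-specific settings from wherever /env was relocated to
    # (e.g. /uat in the rpm -i --relocate example above).
    if [ -r "${RPM_INSTALL_PREFIX:-/env}/app.conf" ]; then
        . "${RPM_INSTALL_PREFIX:-/env}/app.conf"
    fi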
I think this is a very valid question, especially once you move into the application development realm. There, configuring the application for different target systems is your daily bread: you need to configure for Development, Integration Test, Acceptance Test, Production, etc. I surely don't think building a separate package for each environment is the solution. Basically it should be the same code running in different environments.
I know that this requirement is not supported by rpm. But what you can do as a workaround is to use a simple config file that the %pre script knows to look for. The config file could be a simple shell script that, for example, sets environment variables, and then the different pre and post scripts can use those.
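A minimal sketch of that workaround (the config file path, the APP_ENV variable, and its values are assumptions):

    %pre
    # Source an environment file that the admin places on the host before installing.
    if [ -r /etc/myapp/install.conf ]; then
        . /etc/myapp/install.conf      # e.g. sets APP_ENV=uat
    fi
    echo "Installing myapp for environment: ${APP_ENV:-prod}"

The %post script can source the same file and pick environment-specific files or settings based on $APP_ENV.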

Resources