Eigen 3 - Backwards compatibility

Currently I need to resort to a sparse solver for a project. However, I use an old version of Eigen 3 on Ubuntu 12.04 (while working on my thesis I avoid unnecessary updates/upgrades), which means that most of the information I find online does not apply to my outdated version, while the few unsupported modules my version does ship are very hard to use (weird compilation errors, e.g. with unsupported/Eigen/SparseExtra).
I think I should upgrade to the latest stable version; however, it is critical that I be able to replicate the numbers from all the experiments I ran with the current outdated version. Is Eigen safe when it comes to backwards compatibility?
Eigen is also a dependency of PCL, which I am using, so I'm not sure whether this complicates things. Everything is installed with apt-get. Linking to a new version of Eigen locally for experimentation is not possible, because PCL complains and expects to find Eigen installed globally (i.e. in /usr/local/include).

Eigen is source (API) and binary (ABI) backwards compatible (of course, except for unsupported/*). However, the numerical results might differ slightly due to different rounding errors, but that is already the case when, e.g., enabling or disabling SSE or OpenMP.
Since Eigen is header only, it is very easy to try the newest version.
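For a quick sanity check, one can compile the same tiny program against the system headers and against a locally unpacked copy (e.g. g++ -I$HOME/eigen-latest test.cpp) and diff the printed numbers. A minimal sketch; the local path is hypothetical, and SimplicialLDLT assumes Eigen >= 3.1:

// test.cpp -- print the Eigen version and solve a tiny sparse system,
// so the output can be compared across Eigen versions.
#include <iostream>
#include <Eigen/Sparse>

int main() {
    std::cout << "Eigen " << EIGEN_WORLD_VERSION << "."
              << EIGEN_MAJOR_VERSION << "." << EIGEN_MINOR_VERSION << "\n";

    // Small SPD system A*x = b.
    Eigen::SparseMatrix<double> A(2, 2);
    A.insert(0, 0) = 4.0;
    A.insert(0, 1) = 1.0;
    A.insert(1, 0) = 1.0;
    A.insert(1, 1) = 3.0;
    A.makeCompressed();

    Eigen::Vector2d b(1.0, 2.0);
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double> > solver;
    solver.compute(A);

    std::cout.precision(17);  // full double precision, for diffing
    std::cout << solver.solve(b).transpose() << std::endl;
    return 0;
}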

Related

SDL2 Backward Compatibility Guarantees?

SDL2 is often described as breaking backwards compatibility with SDL 1.2.
This implies that within different versions of SDL2, the API and ABI remain backwards-compatible.
However, I have not been able to find any authoritative source confirming that this is the case.
For example, for GLIBC, Red Hat maintains a webpage which states:
One of the GNU C Library's (glibc's) unwritten rules is that a program built against an old version of glibc will continue to work against newer versions of glibc.
This guarantee is very useful for portability, as it means that a program can be compiled against an older version of GLIBC and run on any platform that ships at least that version or any newer version of GLIBC.
The closest to such a guarantee that I have been able to find for SDL2 are the release notes for PySDL2, a separate project, which make passing references to "backwards compatibility":
Improved compatibility with older SDL2 releases […]
[…] properly wrapped now to retain backwards compatibility with previous SDL2 releases
[…] provide backwards compatibility for previous SDL2 releases […]
There are also two issues on GitHub that mention "backwards compatibility" in passing in the context of using SDL2, but they aren't actually directly tied to the SDL library at all.
Is there any official or authoritative source documenting or guaranteeing backwards compatibility across different versions of SDL2?
That is, if I compile a program to dynamically link against an older version of SDL2, is it safe to assume that it will work on platforms that provide a newer version of SDL2?
Official Statements
It appears that one of the first goals reached in the development of SDL2 was to stabilize the ABI, explicitly so that no breaking changes would happen to it until SDL 3, according to the original author and main developer of SDL:
slouken (Mar '13):
As of tonight, SDL 2.0 is ABI locked!
This means that no breaking changes will happen to the API until SDL 3.
Cheers!
From the SDL wiki:
We are obsessive about SDL2 having a backwards-compatible ABI. Whether you build your game using the Steam Runtime SDK or just about any old copy of SDL2, it should work with the one that ships with Steam.
A similar process is planned for the development of SDL 3:
slouken commented on 4 Oct 2022:
Our current plan is to make all the ABI breaking changes immediately before the very first SDL3 release, so we have a stable ABI from the very beginning.
Empirically
Looking at abi-laboratory.pro, we can see that SDL2 has more or less kept its promise of perfect ABI compatibility throughout every release (the site charts compatibility percentages for each release).
Specific changes between each version can furthermore be reported by clicking on the percentages.
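As a practical complement to the ABI guarantee, a program can compare the SDL2 version it was compiled against with the one it is linked against at run time. SDL_VERSION and SDL_GetVersion() are real SDL2 APIs; the check itself is only an illustrative sketch:

// version_check.cpp -- build with: g++ version_check.cpp `sdl2-config --cflags --libs`
#define SDL_MAIN_HANDLED  // we provide our own plain main()
#include <SDL2/SDL.h>
#include <cstdio>

int main() {
    SDL_version compiled;
    SDL_version linked;
    SDL_VERSION(&compiled);   // filled at compile time from the headers
    SDL_GetVersion(&linked);  // reported at run time by the loaded library
    std::printf("compiled against SDL %d.%d.%d, running with SDL %d.%d.%d\n",
                compiled.major, compiled.minor, compiled.patch,
                linked.major, linked.minor, linked.patch);
    return 0;
}

With a stable ABI, a binary built against the older version should print a newer "running with" version and still work.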

How to introduce incompatible changes while remaining in major version zero?

I have a large personal software library that I have been working on for some time. Its current version is 0.1.0.
It is not mature enough to have a major version of 1. I keep modifying the code and introducing incompatible changes that would merit an increase of the major version number. At the same time, some of my other libraries depend on this library and refer to it by version number.
If I introduce incompatible changes and don't want to increase the major version from 0 to 1, how should I increment my version number?
The SemVer website is not very clear on that, it just says:
Major version zero (0.y.z) is for initial development. Anything may change at any time. The public API should not be considered stable.
Does "anything may change at any time" mean that an exception is made for a major version of 0 and that I can change the y and z numbers however I like?
For instance, if my version is 0.1.0 and I introduce an incompatible change, could the new version with that change be 0.2.0?
What others say
On this site it says:
In fact, the SemVer spec defines that anything starting with “0.” doesn’t have to apply any of the SemVer rules.
Another site also seems to suggest that it is OK to increase the minor version when the major version is 0 and incompatible changes are added:
So you just continue through the 0.x.y range, incrementing y for every backwards-compatible change, and x for every incompatible change.
It's up to you, because:
If other libraries depend on your software, it means your software exposes a public API that is being consumed, and if it has one... why isn't it already at version 1.x.x?
After all, why is it so important that your software reach version 1.0.0 only once it's stable? It could just as well jump to 3.0.0 or 4.0.0 by the time it reaches a stable state.
Your software isn't mentally decoupled from your bigger project because, in fact, you'll consider it "mature" only when the whole project (made of many smaller libraries) reaches a "mature" version. But from a technical perspective it's already decoupled 😉
It's true that starting from 0 you don't have to strictly adhere to the SemVer rules.
Everything revolves around what is considered "mature". You said your software isn't mature, but what does that mean? That it could be improved? That it doesn't cover all the corner cases? That it's not 100% tested?
In the end: if you don't consider it mature, continue with the 0.x.y versioning and increase the minor version for incompatible changes; but your "immature" software is already consumed by other libraries, so by now it should really be at 2.0.0 😉

How do I disable version parsing in cabal or stack?

I am using an alternative version numbering approach for my projects. I have encountered strange behavior in cabal and stack that does not let me fully enjoy the benefits of this approach. Both cabal and stack enforce the version to be of the format Int.Int.Int, which does not cover the other version format I use for branches (0.x.x, 1.x.x, 1.0.x, etc.).
If I have the line version: 0.x.x in my .cabal file, I get a Parse of field 'version' failed. error when running cabal build, or Unable to parse cabal file {PROJECT_NAME}.cabal: NoParse "version" 5 when running stack init.
Is there a way to disable version parsing on cabal and stack commands? Is there a flag for it? Or do I have to request this kind of change (adding flags, disabling version parsing) from the developers of cabal and stack?
Why is there any parsing at all? How does it help with building a package? Does cabal or stack automatically increment build numbers on some event? If yes, where could I read more about this? How could I influence the way version numbering incrementation gets implemented in cabal and stack? I want developers of Haskell packages to take into account the possibility of alternative version numbering approaches.
PS. For all interested folks, I want to quickly summarize the idea behind the "weird" version numbers, such as 0.x.x, 1.x.x, 1.0.x. I use version numbers with x's to describe streamlines of development that allow code changes, while version numbers such as 1.0.0, 1.1.0, 2.35.46 describe frozen states of development (to be precise, they are used for released versions of software). Note that version numbers such as 0.x.0, 1.x.15, 2.x.23 are also possible (used for snapshots/builds of software); they mean that the codebase has been inherited from the branches with version numbers 0.x.x, 1.x.x and 2.x.x correspondingly.
Why do I need version numbers such as 0.x.x, 1.x.x and 2.x.x at all? In brief, a different number of x's indicates a different type of branch. For example, the version number pattern N.x.x is used for support branches, while the pattern N.M.x is used for release branches. The idea behind support branches is that they get created due to incompatibility between the corresponding codebases. Release branches get created due to a feature freeze in the corresponding codebase. For example, branches 1.0.x, 1.1.x, 1.2.x, ... get created as a result of feature freezes (or releases) in branch 1.x.x.
I know this is all confusing, but I worked hard to establish this version numbering approach, and I continue raising awareness of the inconsistencies of version numbering through my presentations and other projects. It all makes sense once you think more about the pitfalls of the SemVer approach (you can find a detailed SlideShare presentation on the matter by following the link). But I do not want to defend it for now. For the time being, I just want cabal and stack to stop enforcing their, as I perceive them, unjustified rules on my project. Hope you can help me with that.
You can't. The version will be parsed to Version, which is:
data Version = PV0 {-# UNPACK #-} !Word64
             | PV1 !Int [Int]
Stack uses Cabal as a library but has its own Version type:
newtype Version =
  Version {unVersion :: Vector Word}
  deriving (Eq,Ord,Typeable,Data,Generic,Store,NFData)
Neither cabal nor stack has a way to customize the parsing. You would have to write your own variant of those programs if you want to use another version type. But then again, you wouldn't win anything at that point: neither Hackage nor Stackage would recognize your package's version.
So the 1.x.x isn't possible at the moment. You could exchange x with 99999999 or something similar to mitigate the problem. That being said, it's not clear what cabal install should then install. The 99999999 version? Or the latest stable variant?
If you can express the semantics, a discussion on the mailing list as well as a feature request might change the behaviour in the (far away) future, but for now, you either have to patch the programs yourself or use another numbering scheme.
Is there a way to disable version parsing on cabal and stack commands? Is there a flag for it?
No.
Or do I have to request this kind of change (adding flags, disabling version parsing) from the developers of cabal and stack?
You can of course ask, but there are so many outstanding issues that you are unlikely to get any traction. You will have to be very convincing -- convincing enough to overturn more than 20 years of experience that says the current versioning scheme is basically workable. Realistically, if you want this to happen you'll probably have to maintain a fork of these tools yourself, and provide an alternative place to host packages using this scheme.
Why is there any parsing at all? How does it help with building a package?
Packages specify dependencies, and for each dependency, specify what version ranges they work with. The build tools then use a constraint solver to choose a coherent set of package/version pairs to satisfy all the (transitive) dependencies. To do this, they must at a minimum be able to check whether a given version is in a given range -- which requires parsing the version number at least a little bit.
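To make that last point concrete, here is a small illustrative sketch (in C++ rather than Haskell; the real solver is far more involved) of why a version must be parsed into numeric components before a range check like >=0.10 && <0.11 can work; a plain string comparison would wrongly order "0.9" after "0.10":

// Parse "1.2.3"-style versions into components so range constraints can
// be checked numerically rather than lexicographically on strings.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

std::vector<int> parseVersion(const std::string& s) {
    std::vector<int> parts;
    std::stringstream ss(s);
    std::string item;
    while (std::getline(ss, item, '.'))
        parts.push_back(std::stoi(item));  // throws on "x" -- exactly why 0.x.x is rejected
    return parts;
}

int main() {
    // std::vector's operator< compares component-wise, matching version order.
    bool inRange = parseVersion("0.10.0.1") >= parseVersion("0.10")
                && parseVersion("0.10.0.1") <  parseVersion("0.11");
    std::cout << std::boolalpha << inRange << std::endl;  // true
    return 0;
}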
Does cabal or stack automatically increment build numbers on some event? If yes, where could I read more about this?
There is nothing automatic. But you should take a look at the Package Version Policy, which serves as a social contract between package maintainers. It lets one package maintainer say, "I am using bytestring version 0.10.0.1 and it seems to work. I'm being careful about qualifying all my bytestring imports; therefore I can specify a range like >=0.10 && <0.11 and be sure that things will just work, while giving the bytestring maintainer the ability to push security and efficiency updates to my users." without having to pore through the full documentation of bytestring and hope its maintainer had written about what his version numbers mean.
How could I influence the way version numbering incrementation gets implemented in cabal and stack?
As with your previous question about changing the way the community does things, I think modifications to the Package Versioning Policy are going to be quite difficult, especially changes as radical as you seem to be proposing here. The more radical the change, the more carefully motivated it will have to be to gain traction.
I honestly don't know what a reasonable place to take such motivation and discussion would be; perhaps the haskell-cafe mailing list or similar.

When should I update the so version?

I follow a versioning scheme for a library with a three-part version number and a two-part so version, e.g. example-1.0.0 and libexample.so.1.0.
The last number in the version string is updated when I make changes without breaking the ABI, the second number is updated when I add new symbols, and the major version number is used for incompatible changes.
The so version is updated when symbols are added even if it does not break compatibility with other programs. This means that programs need to be recompiled because the so version has changed even if the library still is ABI compatible with older versions.
Should I avoid updating the so version when I add new symbols?
This means that programs need to be recompiled because the so version has changed even if the library still is ABI compatible with older versions.
That means you are not doing it correctly. You should only change the SONAME when making an ABI-incompatible change. It is customary to use libexample.so.1 as the SONAME. Documentation.
P.S. If you only care about Linux, you should probably stop doing external versioning altogether and instead use symbol versioning to provide a single libexample.so.1 that offers multiple ABI-incompatible symbols to both old and new client binaries.
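As a sketch of what that symbol-versioning approach can look like (all names here, i.e. example_func, the LIBEXAMPLE_* version nodes, and example.map, are hypothetical):

// libexample.cpp -- hypothetical library exporting two versions of the
// same symbol through GNU symbol versioning.
//
// example.map (linker version script):
//   LIBEXAMPLE_1.0 { global: example_func; local: *; };
//   LIBEXAMPLE_2.0 { global: example_func; } LIBEXAMPLE_1.0;
//
// Build:
//   g++ -shared -fPIC -Wl,--version-script=example.map \
//       -Wl,-soname,libexample.so.1 libexample.cpp -o libexample.so.1.2.0

extern "C" {

// Old, ABI-incompatible implementation: existing binaries that linked
// against example_func@LIBEXAMPLE_1.0 keep resolving to this one.
int example_func_v1(int x) { return x; }
__asm__(".symver example_func_v1, example_func@LIBEXAMPLE_1.0");

// New implementation: '@@' marks this as the default version that
// newly linked binaries will bind to.
int example_func_v2(int x, int y) { return x + y; }
__asm__(".symver example_func_v2, example_func@@LIBEXAMPLE_2.0");

}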

Determining binary compatibility under linux

What is the best way to determine a pre-compiled binary's dependencies (specifically in regards to glibc and libstdc++ symbols & versions) and then ensure that a target system has these installed?
I have a limitation in that I cannot provide source code to compile on each machine (employer restriction), so the de facto response of "compile on each machine to ensure compatibility" is not suitable. I also don't wish to provide statically compiled binaries; that seems very much a case of using a hammer to open an egg.
I have considered a number of approaches which loosely center around determining the symbols/libraries my executable/library requires through use of commands such as
ldd -v </path/executable>
or
objdump -x </path/executable> | grep UND
and then somehow running a command on the target system to check whether such symbols, libraries and versions are provided (I'm not entirely certain how to do this step).
This would then be followed by some pattern or symbol matching to ensure the correct versions, or greater, are present.
That said, I feel like this must already have been largely done for me, and I'm suffering from a knowledge gap as to how it is currently implemented.
Any thoughts/suggestions on how to proceed?
I should add that this is for the purposes of installing my software on a wide variety of linux distributions - in particular customised clusters - which may not obey distribution guidelines or standardised packaging methods. The objective being a seamless install.
I wish to accomplish binary compatibility at install time, not at a subsequent runtime, which may occur by a user with insufficient privileges to install dependencies.
Also, since I don't have source code access to all the third-party libraries I use and install (specialised maths/engineering libraries), an in-code solution does not work so well. I suppose I could write a binary that tests whether certain symbols (and versions) are present, but this binary itself would have compatibility issues to run.
I think my solution has to be to compile against older libraries (as mentioned) and install this as well as using the LSB checker (looks promising).
GNU libraries (glibc and libstdc++) support a mechanism called symbol versioning. First of all, these libraries export special symbols used by the dynamic linker to resolve the appropriate symbol version (CXXABI_* and GLIBCXX_* in libstdc++, GLIBC_* in glibc). A simple script along the lines of:
nm -D libc.so.6 | grep " A "
will return the list of version symbols, which can then be further processed in the shell to establish the maximum supported libc interface version (the same works for libstdc++). From C code, one can do the same using dlvsym() (first dlopen() the library, then check whether a certain minimal version of the symbols you need can be looked up with dlvsym()).
Other options for obtaining a glibc version in runtime include gnu_get_libc_version() and confstr() library calls.
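Both dlvsym() and gnu_get_libc_version() are GNU extensions; here is a minimal sketch combining them, where malloc_info/GLIBC_2.10 are merely an example symbol/version pair to probe for (malloc_info was added in glibc 2.10):

// check_glibc.cpp -- build with: g++ check_glibc.cpp -ldl
// (-ldl is needed on glibc < 2.34; g++ defines _GNU_SOURCE by default)
#include <dlfcn.h>              // dlopen, dlvsym (GNU extension)
#include <gnu/libc-version.h>   // gnu_get_libc_version
#include <iostream>

int main() {
    std::cout << "running against glibc " << gnu_get_libc_version() << "\n";

    // Probe the loaded libc for a specific versioned symbol.
    void* libc = dlopen("libc.so.6", RTLD_LAZY);
    if (!libc) {
        std::cerr << dlerror() << "\n";
        return 1;
    }
    void* sym = dlvsym(libc, "malloc_info", "GLIBC_2.10");
    std::cout << "malloc_info@GLIBC_2.10 is "
              << (sym ? "present" : "absent") << "\n";
    dlclose(libc);
    return 0;
}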
However, the proper use of the versioning interface is to write code that explicitly links against a specific glibc/libstdc++ interface version. For example, code linking against the GLIBC_2.10 interface version is expected to work with any newer glibc version (all versions up to 2.18 and beyond). While it is possible to enable versioning on a per-symbol basis (using a ".symver" assembler/linker directive), the more reasonable approach is to set up a chroot environment with an older (minimum supported) version of the toolchain and compile the project against it; the result will run seamlessly with whatever newer version it encounters.
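For completeness, the per-symbol variant mentioned above usually looks like the following classic trick; GLIBC_2.2.5 is the x86-64 baseline version and serves only as an example:

// Force references to memcpy in this translation unit to bind to the old
// baseline version (adjust for your architecture) rather than the newest
// one exported by the build machine's glibc (e.g. memcpy@GLIBC_2.14).
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#include <cstdio>
#include <cstring>

int main() {
    char dst[6];
    std::memcpy(dst, "hello", sizeof dst);  // binds to memcpy@GLIBC_2.2.5
    std::puts(dst);
    return 0;
}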
Use the Linux Application Checker tool ([1], [2], [3]) to check the binary compatibility of your application with various Linux distributions. You can also check compatibility with your custom distribution using this tool.
