Why do various Linux distros use different package managers?

Why do Linux distros have different package managers?
I find this very strange because other software such as text editors, desktop environments and graphics software (Inkscape, Blender, GIMP) are shared among distributions. Why not have a common setup tool?

Like most duplications of behaviour in the UNIX world, it's a combination of many things, but mostly history, politics/religion, and the desire to build a better mousetrap / NIH syndrome. The existence of multiple system components that perform equivalent tasks is often praised as a useful feature by open-source proponents.
Broadly speaking, you mostly need to worry about four flavours of package management system. You have the Debian-derived systems, like Debian and Ubuntu, which use .deb and the apt/dpkg family of management tools, and you have the Red Hat-derived systems, which use the .rpm format and the rpm/yum family of management tools. Feature-wise, the two are broadly equivalent, in my opinion.
The important thing is to learn the toolset you're working with well; they're all well documented. Learn how to check dependencies, verify package signatures and integrity, and find out what services a package provides, and conversely what package is responsible for a particular installed file or program, using the native package tools for your distribution of choice. Ideally, learn the command-line options to do this for yum and rpm, and then for dpkg and aptitude, and you'll have most bases covered. Then use the GUI tools if you prefer.
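For reference, a minimal cheat sheet of the kinds of queries I mean (package and file names are placeholders):

```sh
# RPM-based systems (Fedora, RHEL, openSUSE)
rpm -qR somepackage          # list what a package depends on
rpm -qf /usr/bin/someprog    # which package owns an installed file
rpm -K somepackage.rpm       # verify a package's signature and integrity

# Debian-based systems (Debian, Ubuntu)
apt-cache depends somepackage   # list dependencies
dpkg -S /usr/bin/someprog       # which package owns an installed file
dpkg -l somepackage             # check whether/which version is installed
```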
I think the most important thing to remember is that it's generally a mistake to mix packages from different distributions or releases on the same system, even if they use the same package format. For example, do not install Debian .deb files on your Ubuntu system, or SuSE RPM files on your Fedora system, unless you really understand what you're doing.
The other two flavours I mentioned are less mainstream, but I list them for completeness' sake. These are:
a) no package system beyond binary/source tarballs, à la classic Slackware, and
b) source-based build tools modelled after BSD ports, à la classic Gentoo.
Again, in my opinion, you don't want to be there until you understand why you might want to be.

Historical reasons. Similarly, you could ask why there are multiple companies providing similar services, when just one company could be more efficient overall.
See http://kitenet.net/~joey/pkg-comp/ for a comparison of different package formats from the viewpoint of a Debian developer. Also note that you can use a program called alien to convert a package from one format so it can be installed on another kind of system. It's not perfect, but it helps when a vendor delivers software in the "wrong" package format for your chosen distro.
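For example, converting a vendor RPM for use on a Debian-based system looks roughly like this (the package name is hypothetical, and alien typically adjusts the release number in the output file name):

```sh
sudo alien --to-deb vendor-app-1.0.rpm   # produces something like vendor-app_1.0-2_amd64.deb
sudo dpkg -i vendor-app_*.deb
```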

Historical inertia.

Fedora uses both APT and YUM now; there's a little blurb about it on their wiki. When they started making Fedora, they chose YUM because APT hadn't had any updates for a while. They support APT now, but default to YUM because that is what the Anaconda installer uses.

Some do share a package manager. I've used Apt on several distributions. Some distros need something more specific to their philosophy. For example, Gentoo needs something that grabs source and compiles rather than just installing a binary.

In some cases it's simply that the makers of the distro prefer one package management system to another. The nice thing about Linux is choice and multiple package management systems mean more choice.

There are also many different text editors, desktop environments, and so on. Distributions only "share" these in the sense that they all provide the same programs.
But each distribution has to settle on one package manager: a package manager wouldn't know about software installed by a different one. So distributions either adopt an existing package manager or develop their own, tailored to their specific needs.
Two very common package managers are RPM and APT, each of which is used by several distributions.

Related

CMake/CPack: Preferred package generators for different platforms

I want to distribute executables and libraries of a C/C++ project on Linux, OSX and Windows. What are the preferred CPack generators, i.e. which are likely to be available for most users? On Windows there only seems to be NSIS, but on Linux and OSX there are several alternatives.
By the way, a source distribution is generated as well, so in theory, users of all platforms should be able to compile the code themselves, but we want to provide precompiled binaries for convenience.
There are multiple common practices on each of the different platforms. Which one is best for you will depend on a variety of factors, but the following should at least help choose among the more popular formats that CMake/CPack has direct support for. I'm assuming you are using CPack via CMake (i.e. via the CPack module, possibly with package components using the CPackComponent module as well).
Windows
The NSIS package generator produces executable installers which average users are well accustomed to using. These support component-based installs, so you could provide the source as an optional component. CMake's support for this package generator is fairly mature, but it is perhaps becoming a less preferred method in recent times.
The WIX package generator produces MSI installers. Support for this is newer and seems to be more active in terms of feature development, etc. It also supports component-based installs and seems to be becoming the preferred format over NSIS.
Mac
There are a number of options to choose from for Mac, but which one is most appropriate depends on what you want to package up. If you just want to provide a single app bundle, the DMG package generator (also sometimes referred to as the DragNDrop generator) is probably what you want. Users are well acquainted with these and they are easy to use. Avoid the Bundle generator: it is older and more limited in what it supports, so the DMG generator should be preferred instead.
For packages containing more than a single bundle, the DMG generator is still potentially suitable, but a proper installer may be more appropriate. Until recent years, the PackageMaker generator was the go-to generator for that, but it has been superseded by the ProductBuild generator (supported by CMake since version 3.7).
Linux
On Red Hat-based systems, RPM is usually the package format of choice (use the RPM generator), whereas for Debian-based systems the DEB format is preferred (use the DEB generator). Debian-based systems can support RPM using tools like alien, but users almost always prefer a native DEB, so if you're happy to provide both, you can keep both camps happy. Note, however, that you will have to pay careful attention to binary compatibility. Simple packages used to be able to build against the LSB (Linux Standard Base) to produce a single RPM that would work on all major Linux distributions (even Debian-based ones), but the LSB hasn't really kept up with recent developments, and it never really supported the full set of functionality most complex apps needed (or the versions of the packages it provided were too old). The LSB does, however, provide very useful tools like the app checker for assessing whether packages you've built (by any means) will be missing symbols, etc. across various Linux distributions.
Note that for Linux you should distinguish between packaging intended for inclusion in Linux distributions themselves and packaging you expect users to download and install outside the distribution's packaging system. Larger independent commercial software products tend to be distributed as standalone packages, including the relevant libraries, etc., and installing under /opt by default (if they follow guidelines like those advocated by the LSB and the Filesystem Hierarchy Standard (FHS)). Ideally, you would make your packages relocatable so that distribution maintainers have an easier time adapting your packaging method to their distribution's requirements.
RPM and DEB both support source packages to some degree.
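Assuming your CMakeLists.txt already does include(CPack), producing the Linux formats discussed above is just a matter of choosing the generator at packaging time, e.g.:

```sh
mkdir build && cd build
cmake ..
cmake --build .
cpack -G DEB   # .deb for Debian/Ubuntu users
cpack -G RPM   # .rpm for Red Hat/Fedora users (needs rpmbuild on the build host)
cpack -G TGZ   # plain tarball as a lowest-common-denominator fallback
```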
Cross-platform
The IFW package generator is favoured by some as a way to produce installers which have a similar look and feel across all platforms. It is also quite progressive in offering support for features like downloadable components. If having an easy-to-use graphical installer across all platforms is of interest, then this one is probably what you're looking for.
The Archive package generator provides support for archives like ZIP, tarballs, 7z and more. These are very basic formats that simply lump your files together into a single archive; they lack useful features like desktop integration and pre-/post-install and uninstall scripts, but they are handy as a second alternative alongside one of the formats above. In particular, they can be useful for users who don't have admin access on their systems and simply want to unpack to a convenient location.

Universal installers on Linux

I think it's a fairly common problem, but I'd like community opinion on it, so I am posting this question.
Use case: I am trying to create a single 32-bit package for all the Linux distros (32-bit and 64-bit) that I want to support.
Problem: The INSTALLER
Needs to be able to run pre/post install scripts.
Should be able to run on both 32bit and 64 bit machines
Should be able to support older and newer distros (CentOS 6 and above)
Should have an online repository for updating packages.
Should be able to run without X server
Should not have any dependency on a software that cannot be installed using standard yum/zypper/apt commands. Should not depend on any non standard repository.
I came across this link:
https://www.reddit.com/r/linux/comments/4ohvur/nix_vs_snap_vs_flatpak_what_are_the_differences/
It lists many alternatives, but none of them seems to satisfy all the above requirements. (Or have I overlooked something?)
In addition I looked at the following two alternatives:
Qt Installer Framework (needs X to run, if I'm right)
Self-extracting scripts with a bundled tarball.
The only solution that fits all the needs is the self-extracting script with a bundled tarball. But it requires a lot of work, effectively managing all the installation/upgrade logic myself. Before I go ahead with this alternative, can anyone confirm whether they have had success creating a single package for many distros?
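For reference, the self-extracting approach the asker describes usually boils down to a shell stub with a tarball appended, along these lines (install path and hook name are made up):

```sh
#!/bin/sh
# find the line just after the __ARCHIVE__ marker and unpack everything below it
ARCHIVE_START=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
mkdir -p /opt/myapp                                 # hypothetical install prefix
tail -n +"$ARCHIVE_START" "$0" | tar xzf - -C /opt/myapp
/opt/myapp/postinstall.sh                           # hypothetical post-install hook
exit 0
__ARCHIVE__
```

The binary payload is appended after the marker, e.g. with `cat stub.sh payload.tar.gz > installer.sh`.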
I don't believe much in the concept of rolling one's own installer with a self-extracting archive. Every distro is different and should be addressed using its own installation mechanisms. Also, writing your own installer is reinventing the wheel.
I'd advocate using the packaging methods of all the distros you're targeting. Essentially, one SPEC file is typically enough to support CentOS 6 and 7 and all modern Fedora versions; use mock or the copr service to generate all the binary packages for the distros you're targeting (see the sketch after this answer). Then a debian/rules file should be enough to generate Debian, Ubuntu and Mint packages. Add a pacman script if you want to support Arch Linux, too (it's pretty easy).
Admittedly, this way you end up with a whole bunch of different packages rather than one. However, you now have an installer for each system that actually fits that system and is linked against the libraries available on that distro, so you don't have to bundle all the dependencies as you would in a Flatpak and the like.
Installation from distribution-specific packages is almost always "smoother" than installation through some self-extracting archive that wasn't designed for the user's specific distro version, so this is probably a big plus for your users. Also, having packages usually makes it very easy and stable to offer an update path, should you decide your software needs patches later on.
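As a concrete sketch of the mock workflow mentioned above (the SRPM name is hypothetical; the chroot configs ship with mock):

```sh
# rebuild the same source RPM in a clean chroot per target distro
mock -r epel-6-x86_64 --rebuild myapp-1.0-1.src.rpm
mock -r epel-7-x86_64 --rebuild myapp-1.0-1.src.rpm
mock -r fedora-rawhide-x86_64 --rebuild myapp-1.0-1.src.rpm
# resulting packages land under /var/lib/mock/<config>/result/
```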

Deploying C++ game in Linux

I am an indie game developer working on the Windows platform, but I have little to no experience with Linux and deploying apps for it. I am polishing my game, written in C++11 and based on SDL 2.0 with several other cross-platform dependencies (like AngelScript or PugiXML), on Windows, and I want to distribute it for Linux too; I have a few questions about that. The game is commercial, closed source, and currently on Steam's Greenlight, but I want to distribute a free alpha version downloadable from my website regardless of Greenlight status.
1.) Are the main Linux distributions ABI (application binary interface) compatible? Or do I need to compile my game on every supported distribution/platform?
2.) If so, which distributions/platforms are reasonable choices to support?
3.) What is the best way to install an app and its dependencies on Linux? I've read about deb and rpm systems, but it's still confusing - is there any way to automatically generate setup packages for various distributions?
4.) How does Steam work with Linux? How should I prepare my app for distribution via it?
Excuse me if I'm asking the wrong questions; the whole world of Linux is pretty new to me and I got lost reading various articles and manual pages...
This depends on what the distribution is derived from. Generally, there's no need to recompile a program built on something like Ubuntu to run it under Fedora, so long as the code remains unchanged: Ubuntu and Fedora will be using largely the same libraries (albeit perhaps in different locations), and anything OpenGL-related is a driver issue, so recompiling is hardly a requirement. I would be extremely surprised if you had to recompile your software, since all distros use pretty much the same set of libraries, ship bash, and use the Linux kernel; they differ mainly in their package managers. That last part is where it gets complex:
Those distributions have different package managers, which require you to repackage your software accordingly. You could release pre-compiled binaries in a tar.gz file and simply have distro maintainers package the software for you, though if you want to control how your software is distributed, you had better do that yourself. Because of the many package managers out there, people still resort to recompiling source code through a makefile, which can be generated with cmake. Problems arise when certain dependencies are, for whatever reason, 'renamed' on some distro: because of a simple name change, the program suddenly can't find the dependency. Since there is no shared naming convention, this makes life even harder. The important virtue here is trust: trust that developers follow naming conventions so everyone can reference the same package by the same name.
The best distros to support are the most popular ones: Ubuntu and openSUSE would be great starting points. Linux Mint and Debian use the same package format as Ubuntu, while Fedora and Red Hat use the same format as openSUSE.
Meanwhile, you should know that you can't statically link GPL'd code into your software without also making your software GPL. A way to work round this is to resolve dependencies by either (a) including the relevant dependencies in the same folder as the executable (much like .dlls on Windows), or (b) relying on the system your program runs on to provide the same libraries it was compiled and linked against. The latter is riskier, since it assumes the user will have the libraries, and unmodified, but it keeps your download smaller. The former increases the size of your package but ensures consistency across all systems.
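If you go the bundled-libraries route, one common trick (a sketch; the file names are hypothetical) is to embed an $ORIGIN-relative rpath so the binary searches a lib/ directory next to itself:

```sh
# link so the loader looks in ./lib relative to the executable first
gcc -o mygame main.o -Wl,-rpath,'$ORIGIN/lib' -lSDL2
# then ship a layout like:
#   mygame
#   lib/libSDL2-2.0.so.0
```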
As for installation, you would need a shell script that moves the contents of your directory to the right locations. Generally, the main app goes into /usr/bin, and any app-related data goes into the user's home folder. This again depends on the distro; look for a .local directory, or create a hidden directory dedicated to your app (hidden folders are prefixed with a period). Why put this stuff in the home folder, and what goes there? The why: because the home folder gives the user read-and-write permissions by default. The what: anything the app needs at run time without the user having to authorise it. Dependencies, by this logic, can live in the home folder too, preferably under your own directory. Conventions differ; some may disagree with me on this one.
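A minimal sketch of such an install script, following the layout described above (all paths are assumptions; the /usr/bin step needs root, while the home-folder step should run as the user):

```sh
#!/bin/sh
set -e
install -Dm755 mygame /usr/bin/mygame   # main executable (run as root)
mkdir -p "$HOME/.mygame"                # hidden per-user data directory
cp -r data/ "$HOME/.mygame/"            # app data the user can read and write
```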
You might also want to use Steam's API, which does most of this work for you. Under Steam, your app can live in Steam's own directory and function as a Steam app with all the functionality therein.
See http://www.steampowered.com/steamworks/ to find out more about how to get your app on Steam. I have to say I was really impressed, and they even include code samples. The best part is that this API is available on Linux as well. Beyond that, I only know that Steam handles the execution of your app through its own layer, in which case there's no need to distribute your app independently via the previous steps.
Note that you can also distribute your software through the Ubuntu Software Centre if you are interested.
http://developer.ubuntu.com/apps/
Ubuntu, though, focuses more on getting apps running regardless of platform.
Know that Linux has no single convention; its conventions are derived from pragmatism, not theory. At the end of the day, how you want your software run on Linux is up to you.
I'm not a game developer, and I come from the open-source community, so I can't really advise on delivering binaries. I'll try to answer some of your questions though:
Valve has a Steam runtime you can target on Linux (https://github.com/ValveSoftware/steam-runtime) - this would be the best way to port your game. I saw it mentioned in one of their Linux dev videos on YouTube; my understanding is that it bundles a bunch of libraries, including SDL, and is set up to emulate a specific version of Ubuntu. So if you write your game against the Steam runtime, it will run on any Linux distro that Steam has been ported to.
As for natively packaging your game, one thing to consider is that if you package it as a DEB or RPM and declare dependencies on distro-provided libraries, your app may break when those libraries are updated (some distros update libs quite often; others are more stable). Dynamically linking against system libraries works well for open source, since people can patch the code when libraries change, but it's not ideal for closed-source software.
You can statically link your binary at build time, which means you have a larger binary, but then you don't have to worry about the app breaking when libs are updated.
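A minimal sketch of what that looks like (glibc makes fully static linking awkward in practice, so treat this as illustrative):

```sh
gcc -static -o mygame main.c   # pull everything into one self-contained binary
ldd mygame                     # should report "not a dynamic executable"
```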
Some programs, like Chrome, bundle their own libs (which are essentially forks of the system libs); again, this makes the download much larger and also has the potential to cause security problems, so people tend to frown on it (see: http://lwn.net/Articles/378865/).
1.) No, the ABIs of the main Linux distributions are not fully compatible. In a narrow sense they are mostly compatible, but in a broader sense there are many differences in how parts of the system work, how they are configured, and so on. On the other hand, making "packages" is not a big problem per se, just some work. Also, what works for a "major" distribution (like Debian) works (pretty much always) on all its derivatives (like Ubuntu, Mint...).
2.) A good starting list to support is: Debian (.deb), Red Hat and Fedora (both .rpm). Their package formats and tools are mature and well known, and this will "cover" a lot of derivatives.
3.) There are some "cross-distribution" package builders, but they are mostly not up to the task. Writing a definition script for most package formats is not hard once you get the hang of it. Also, some commercial tools have "installation script generators" for Linux which take care of things for you (they don't generate .deb or .rpm, but rather a complex shell script, similar to a Windows EXE installer).
4.) Sorry, I don't know much about Steam. O:)
So, from my experience, the best way to do it is to accept the ugly truth that you have to select a few major distributions and their versions ('cause things can change very much between versions) and make packages for them, and test often on all of them. And be happy that you're not developing some long-lived kernel module/driver, because if you add tracking kernel API changes to the whole picture... :)

Linux distro/version to support when releasing a software on Linux

We are about to release a couple of software products with Linux support.
For Mac and Windows, the number of versions to support is quite limited (XP, 2000, Vista, 7 for Windows; 10.4-10.6 for Mac). But for Linux it's another story.
We'd like to support as many Linux systems as possible, but the choice is large.
The questions are:
Which distribution format (binaries) should we use to support as many Linux systems as possible?
For testing, what "base Linux" can we test on and extend our results to other Linux systems?
Assuming we provide a statically linked binary with all the dependencies, what do we need to check? I assume kernel version and libc version, but I'm wondering what else.
Our software is written in ANSI compliant C with a bit of BSD and POSIX (gettimeofday, pthreads).
So you think three versions each for Mac and Windows is normal, but you shy away from Linux? Hm.
Just make sure it builds using the standard tool chains -- configure, make and make install traditionally. The rest should take care of itself.
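That is, the classic three-step dance that any distro packager can wrap:

```sh
./configure --prefix=/usr/local
make
sudo make install
```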
Else, pick what you are comfortable with. For me that would be Debian/Ubuntu, others prefer Fedora. Look at the Linux Standards Base and things like FreeDesktop.org for other standards. Kernel and libc should not matter unless you are doing something very hardware or driver-specific.
The kernel strives to maintain a backwards-compatible binary API. Statically linked binaries built against 1.0 series kernels are supposed to still run fine to this day on the latest 2.6 series kernels.
If you are statically linking with everything (including libc), then the major problem you are likely to face is different filesystem arrangements, which may not even be a great issue for you. (Testing is the only way to find out, though).
One idea is to survey your proposed customer base to see which Linux versions they run and make a short list from their feedback. However, from what I know (which is subjective!)...
I would suggest shipping two different distribution types: rpm and .tar.gz. With rpm you cater for the latest Fedora/openSUSE/RHEL/SLES (and derived distros, which is a fair chunk of the corporate market). You are already handling a lot of the dependency problems by static linking, so checking the kernel version should be sufficient.
With the .tar.gz distribution you cater for 'all others', but watch out for support and configuration problems, as they quickly become a time sink.
For testing, have virtual machines of each version you choose to support. These can also be used for product support (I assume you will need to provide product support?). I wouldn't try to extrapolate results between Linux versions, because there are too many hidden 'gotchas'.
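For the rpm side, a minimal sketch of producing packages from a spec file (the spec name is hypothetical; the --target build needs a matching 32-bit toolchain installed):

```sh
rpmbuild -ba myapp.spec                 # binary + source RPMs for the build host's arch
rpmbuild -ba --target i686 myapp.spec   # 32-bit build
```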
You can release statically compiled Linux binaries against the kernel and a version of glibc; you really only need to worry about compatibility-breaking revisions. If you have some time, you can set everything up to cross-compile on the same host. The kernel is backward compatible; glibc is more temperamental.
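Since glibc is the temperamental part, a quick way to check the compatibility floor of a built binary (the binary name is hypothetical; requires GNU binutils and coreutils):

```sh
# list the glibc symbol versions the binary requires; the highest sets your minimum glibc
objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -u -V
```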
File paths can be assumed to be Linux Standard Base, if you want to package it with an installer. The more flexible you can be here, the better. I've never heard a customer complain about receiving a tarball of binaries, which I'd recommend offering. I have had customers complain about incorrect assumptions.
Your best bet for a formal package format is probably between DEB (Debian and derivatives, like Ubuntu) and RPM (Red Hat and derivatives, like CentOS). Packages are nice to have, but are just a headache if you don't plan on utilizing the native update manager.
For test & build, I'd personally recommend Gentoo. It's pretty raw, however, so you might want to look into Ubuntu as a distant second choice.
This is an issue for your product management team. Once they have determined that producing a Linux version is a desirable idea (i.e. on a cost-benefit basis), then you will need to find out what distros your customers use or want supported.
In principle you can support any but the more you support the more of a headache it will be, so you want as FEW as possible.
Support as few OS / architecture combinations as your PM thinks you can get away with
Deprecate OSs / architectures as soon as you can
Only take on new ones if premium support customers demand it, or to get big deals, as per your PM's decision.
How hard it is to support them is largely dependent on how complex your product is (esp. dependencies) and how complete its auto-test suite is. Adding more supported OSs ties your hands with respect to library usage, kernel feature usage etc as well as testing, so it's not something you want to be lumbered with long-term.
So in short, it's not a software engineering issue, but a product management one.

Building Linux packages for multiple distributions and versions

My company has a software product that's written in C for a Linux platform, built with autotools and distributed via binary packages. To make the binaries, we first produce a source RPM and then compile the source from the SRPM.
Currently we only provide RPM packages for 64-bit Fedora 10, but we want to start providing packages for multiple Linux distributions - 32-bit as well as 64-bit - and possibly different versions of each distribution as well (e.g. Fedora 11 as well as Fedora 10).
I've heard that the best way to produce builds for multiple Linux flavours is to have a single build server and use a different chrooted environment for each set of packages that you want to build. Does anyone have a good resource that explains this in more detail, maybe with examples of well-known projects that use this build mechanism, or a better alternative to achieve the same goal?
Maybe you can research the following projects to get started:
Novell Build service
Fedora Koji
You can use the LSB appchecker to test your application/dynlib/shell script compatibility. After that, you can use RPM for all RPM-based distributions, alien for all apt-based distributions, and tar.gz for the others.
Tools like checkinstall will help you to produce packages for different distros. Personally, if you are looking to integrate with existing package management systems, you will also want to host multiple repositories on your servers and provide packages there, then have users configure their package managers to pull the apps off your servers.
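A sketch of the hosting side for an RPM repository (paths, repo id and URL are made up):

```sh
# on the server: generate yum/dnf metadata for a directory of RPMs
createrepo /srv/www/repo/el7/x86_64

# on the client: a /etc/yum.repos.d/vendor.repo along these lines
# [vendor]
# name=Vendor packages
# baseurl=https://pkgs.example.com/el7/x86_64
# enabled=1
# gpgcheck=1
```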
Depending on what your software does exactly and which dependencies it has (if any) on local libraries, you may be able to build your software against an older glibc and have it work on many different distributions. This is what we do with InstallBuilder. If you do not have dependencies on specific packages, it is also possible to create RPM or DEB packages that will run on most RPM- or DEB-based Linux distros out there. Cross-Linux development, in any case, is not easy :) Good luck!
This is one of the cases covered by Bob Aiello in this article on build agents. We have several customers who use this approach to build on several platforms in parallel.

Resources