Compiling for many Linux distributions

In short: I'm going to release an application written in OCaml, and I was planning to distribute it as source code.
The problem is that the OCaml development system is neither lightweight nor commonly installed, so I would also like to release binaries for the various operating systems.
Windows is not a problem, since I can compile the program through Cygwin and distribute it with the required DLLs.
OS X is not a problem either, since I can compile and distribute it easily (no external dependencies, from what I've tried).
Linux is where the problems start, since I don't really know the best way to compile and distribute the program. The program itself doesn't depend on anything (everything is statically linked), but how do I cover many distributions?
I have a virtualized Ubuntu Server 10 on an amd64 architecture; I used this machine to test the program under Linux, and everything works fine. Of course, if I move the binary to a 32-bit Ubuntu it stops working, and I haven't been able to try other distributions. Are there tricks to manage this kind of issue? (It seems to be a recurring one.)
For example:
Can I compile both 32-bit and 64-bit binaries from the same machine?
Will a binary compiled under Ubuntu also run on other distributions?
Which "branches" should I consider to cover as many distros as possible?

You can generally produce 64-bit and 32-bit binaries on a 64-bit machine with relative ease - i.e. the distribution usually has the proper support in its compiler packages, and you can actually test your build. Note that the OS needs to be 64-bit too, not just the CPU.
Static binaries generally run everywhere, provided the kernel and CPU offer adequate support - keep an eye on your compiler options to ensure this. They are your best bet for compatibility. Shared libraries can be an issue; to deal with this, binaries linked against shared libraries can be bundled with those libraries and run through a loader script if necessary.
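If you do ship shared libraries, the loader script can be as simple as this sketch (the directory layout and binary name are assumptions, not anything prescribed above):

    #!/bin/sh
    # Launcher sketch: prefer the bundled libraries in ./lib over system ones.
    HERE=$(dirname "$(readlink -f "$0")")
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"    # myapp is a placeholder name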
You should at least target Debian/Ubuntu with dpkg packages, Red Hat/Fedora/Mandriva with RPM, and SUSE/openSUSE with RPM again (I mention the two RPM cases separately because you might need to produce separate packages for these "families" of distributions). You should also provide a .tar.bz2 or a .run installer for the rest.
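As a rough illustration of the two main package formats (the directory name, package name, and spec file below are placeholders):

    # .deb: stage the files under myapp-1.0/ with a DEBIAN/control metadata file, then:
    dpkg-deb --build myapp-1.0 myapp_1.0_amd64.deb

    # .rpm: describe the build in a spec file and run:
    rpmbuild -bb myapp.spec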
You can have a look at the options provided by e.g. Oracle for Java and VirtualBox to see how they provide their software.

You could look at building it in the openSUSE Build Service. Although run by openSUSE, this will build packages for:
openSUSE
SUSE Enterprise variants
Mandriva
Fedora
Red Hat Enterprise/CentOS
Debian
Ubuntu

The best solution is to release the source code under a free license. You can package it for a couple of distributions yourself (e.g. Debian, Fedora), then cooperate with other people who port it to the rest. The maintainers will often do most of this work, with only a few upstream changes required.

Yes, you can compile for both 32-bit and 64-bit from the same machine:
http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86_002d64-Options.html
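For example, on a multilib-enabled 64-bit host (on Debian/Ubuntu the gcc-multilib package provides the 32-bit support files):

    gcc -m64 -o myapp64 myapp.c    # 64-bit binary (the default on a 64-bit host)
    gcc -m32 -o myapp32 myapp.c    # 32-bit binary, needs multilib support
    file myapp32 myapp64           # verify the target architectures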
A binary built on Ubuntu will most likely run on other distributions; the main thing you need to worry about is whether you use shared libraries (especially if you use a GUI framework or something like that).
Not sure what you mean by "branches", but if you are talking about distributions, I would build on the most vanilla Ubuntu distribution...

I'd recommend you just package 32-bit and 64-bit binaries as .deb and RPM; that way you can hit most of the major distros (Debian, Fedora, openSUSE, Ubuntu).
Just give clear installation instructions regarding dependencies, the command-line steps for other distros, etc., and you shouldn't have much of a problem distributing a source tarball as well.

Related

Universal installers on Linux

I think it's a fairly common problem, but I need community opinion, so I am posting this question.
Use case: I am trying to create a single 32-bit package for all the Linux distros (32-bit and 64-bit) that I want to support.
Problem: The INSTALLER
Needs to be able to run pre-/post-install scripts.
Should be able to run on both 32-bit and 64-bit machines.
Should support older as well as newer distros (CentOS 6 and above).
Should have an online repository for updating packages.
Should be able to run without an X server.
Should not depend on any software that cannot be installed using standard yum/zypper/apt commands, and should not depend on any non-standard repository.
I came across this link:
https://www.reddit.com/r/linux/comments/4ohvur/nix_vs_snap_vs_flatpak_what_are_the_differences/
It lists many alternatives, but none of them seems to satisfy all the above requirements (or have I overlooked something?).
In addition, I looked at the following two alternatives:
Qt Installer Framework (needs X to run, if I'm right)
Self-extracting scripts with bundled tarballs
The only solution that fits all the needs is "self-extracting scripts with bundled tarballs", but it requires a lot of work, effectively managing all the installation/upgrade machinery myself. Before I go ahead with this alternative, can anyone confirm whether they have had success creating a single package for many distros?
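For reference, the core of such a self-extracting script is small (existing tools like makeself automate the same pattern). A hedged sketch, in which the install prefix and post-install hook are placeholders:

    #!/bin/sh
    # install.run: shell header followed by an appended tarball.
    # Built with: cat header.sh payload.tar.gz > install.run
    ARCHIVE_LINE=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
    mkdir -p /opt/myapp                                       # placeholder prefix
    tail -n +"$ARCHIVE_LINE" "$0" | tar xzf - -C /opt/myapp   # unpack the payload
    /opt/myapp/postinstall.sh                                 # hypothetical post-install hook
    exit 0
    __ARCHIVE__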
I don't much believe in rolling one's own installer with a self-extracting archive. Every distro is different and should be addressed using its own installation mechanisms. Also, writing your own installer is reinventing the wheel.
I'd advocate using the packaging methods of all the distros you're targeting. A single SPEC file is typically enough to support CentOS 6 and 7 and all modern Fedora versions; use mock or the Copr service to generate the binary packages for the distros you're targeting. A Debian rules file should likewise be enough to generate Debian, Ubuntu, and Mint packages. Add a PKGBUILD if you want to support Arch Linux, too (it's pretty easy).
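For illustration, rebuilding a source RPM in clean per-target chroots with mock looks like this (the chroot names are standard mock configs; the SRPM name is a placeholder):

    mock -r epel-6-x86_64 --rebuild myapp-1.0-1.src.rpm
    mock -r epel-7-x86_64 --rebuild myapp-1.0-1.src.rpm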
Admittedly, this way you end up with a whole bunch of different packages rather than one. However, you now have an installer for each system that actually fits that system and is linked against the libraries available on that distro, so you don't have to bundle all the dependencies as you would in a Flatpak etc.
Installation from distribution-specific packages is almost always "smoother" than installation through some self-extracting archive that wasn't actually designed for my specific distro version, so this is probably a big plus for your users. Also, having packages usually makes it easy and reliable to offer an update path, should you decide your software needs patches later on.

Best way to build cross toolchains on Mac OS X

I spent the last three weeks researching cross-development under Mac OS X. I want to achieve two separate results, but I believe they can be reached along the same path.
I want to
set up distcc so that the iMac I recently got at home (OS X 10.6, 64-bit native) can help my old Gentoo laptop; I also use the iMac for iOS development, so the Xcode 4 tools are already there;
develop my pet project, which is an ELF kernel for x86, x86_64, and ARM (and I'll stop here as it's off-topic).
So, after a lot of that thinking thing we all do in these cases, I came up with the idea that to reach the first goal I need to set up an i686-pc-linux-gnu toolchain (or is it i686-unknown-linux-gnu?) with all the appropriate versions (e.g. gcc-4.4) and make it callable by distcc. It seems like a reasonable task, but unfortunately there seem to be clearer tools and instructions for building toolchains for obscure archs like SPARC or MIPS, and not a single reasonably up-to-date resource on the best way to go for x86. Therefore, first question: has anybody successfully built such a toolchain and feels like sharing the pain? :)
Second goal. My current workbench is Gentoo on an i686 laptop (yes, the same one as in the first goal) with all the regular development stuff, and I use QEMU to test the kernel (its gdb integration is awesome). What I'd really like is to keep using the laptop while travelling (I commute a lot) and continue to work and test on the iMac when I'm home (git is awesome in this respect). Hence, second question: has anybody done something like this and wants to share?
I'd really appreciate any input. Seriously.
EDIT: I know about MacPorts, crosstool, and crosstool-ng. I tried installing i386-elf-binutils 2.18 from MacPorts, only to discover I have 2.20 on my laptop. Also, I couldn't get gcc44 to produce i686-pc-linux-gnu ELF objects, and using i386-elf-gcc is not an option, as I need 4.4 and the packaged one is 4.3.
This is no easy task, especially because you want to cross-compile for so many different platforms.
The most common approach is to run a virtual machine with the desired OS (e.g. VirtualBox, Parallels, VMware Fusion) and install your workbench tools inside it. This approach is widely used because it is not complex to set up, and it also makes it easier to write, test, and debug code for/from the target system.
Of course, if you search enough you'll find all sorts of hacks/tricks to setup a toolchain on Mac OS X and compile code for other architectures:
One of these uses Buildroot, but that means there is no official support for Mac OS X.
Another one, also interesting, offers a .dmg package with the tools needed to compile for Linux on Mac OS X.
You already mentioned Gentoo, so I think you should take a look at Gentoo Prefix. Gentoo Prefix lets you install a small Gentoo system in a user-defined directory (= prefix). From there, you may start a shell which lets you use portage (= Gentoo's package system), which should enable you to install the necessary tools.
I do not know what shape Prefix on OS X is in today, but I was able to install it on a friend's MacBook a year or so ago. If you are interested, I can give further details about the installation process, which can be a bit tricky.
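A minimal sketch of the bootstrap, assuming you have downloaded bootstrap-prefix.sh from the Gentoo Prefix project (the prefix location is arbitrary):

    export EPREFIX="$HOME/gentoo"       # where the mini-Gentoo will live
    ./bootstrap-prefix.sh "$EPREFIX"    # long-running bootstrap
    "$EPREFIX/startprefix"              # enter a shell inside the prefix
    emerge -av sys-devel/crossdev       # crossdev builds cross toolchains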

How do I build an app for an old Linux distribution and avoid the "FATAL: kernel too old" error?

I distribute a statically linked binary version of my application on Linux. However, on systems with a 2.4 kernel, I get a segfault on startup and the message "FATAL: kernel too old."
How can I easily get a version up and running on a 2.4 kernel? Some of the libraries I need aren't even available on old Linux distributions circa 2003. Is there an apt-get install or something similar that will let me easily target older kernels?
The easiest way is to simply install VirtualBox (or something similar, e.g. VMware), install CentOS 3 or any suitably old distro with a 2.4 kernel, and build/test your app on that.
Since you're getting "kernel too old", chances are you're relying on some features not present in 2.4 kernels, so you'll have to track those down and rework them.
The error might simply be caused by linking statically against glibc. You could try linking glibc dynamically and all your other libs statically, though to be backwards compatible you'd have to build your app on a system with an old glibc. Using the LSB tools to build could help too.
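A sketch of such a mixed link line (-lfoo and -lbar stand in for your third-party libraries):

    # -Wl,-Bstatic / -Wl,-Bdynamic toggle how the following -l options resolve;
    # glibc stays dynamic because it comes after the final -Bdynamic.
    g++ main.o -o myapp \
        -static-libgcc -static-libstdc++ \
        -Wl,-Bstatic -lfoo -lbar \
        -Wl,-Bdynamic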
For my use case, I can't statically link my supporting libraries, and current Linux distributions seem to make that difficult in certain situations anyway. But I needed my application binaries to run on 10-year-old Linux systems.
I also didn't want to limit myself to an ancient 10-year-old C/C++ compiler, and the hardware I needed to use prevented me from simply installing a 10-year-old Linux distribution.
So, I did this:
Installed Docker.
Within a Docker container, installed a 10-year-old Linux system (I used Debian's Lenny distribution). This has the added advantage of making the build system available to any other machine that can run Docker.
Within the Docker container, built the current GNU compilers (8.3.0 when I did this).
This gave me a modern compiler that produced binaries able to run on very old Linux systems. I did this for both 32-bit and 64-bit targets.
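A sketch of that setup (the image tag and the GCC configure options are assumptions, not the exact ones used here):

    # Start a container from an archived Lenny image; any end-of-life image
    # with the old glibc you are targeting will do.
    docker run -it --name gcc-lenny debian/eol:lenny /bin/bash

    # Inside the container, build a modern GCC against the old glibc so that
    # its output runs on equally old systems (options abbreviated):
    #   ./configure --prefix=/opt/gcc-8.3 --enable-languages=c,c++ --disable-multilib
    #   make && make install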
From there, I created a series of scripts that let me use the Docker-contained cross-compiler to build all my supporting libraries. I made sure to set the rpath of my compiled binaries to a path relative to the binaries (using -Wl,-rpath,$ORIGIN/../lib), and I built a script to retrieve the needed supporting libraries from the compiler: g++ -print-search-dirs gives the paths, ldd lists the supporting libraries my binaries require, and some aggressive bash scripting finds those libraries within the g++ search dirs and drops them into the rpath I set up.
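A minimal sketch of the library-collection step, assuming the binaries live in dist/bin and the rpath points at ../lib (all names are illustrative):

    BIN=dist/bin/myapp
    mkdir -p dist/lib
    # ldd prints lines like "libfoo.so.1 => /path/to/libfoo.so.1 (0x...)";
    # field 3 is the resolved path of each needed library.
    ldd "$BIN" | awk '/=> \// { print $3 }' | while read -r lib; do
        cp -n "$lib" dist/lib/
    done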
From there, I packaged my binary accordingly, with all supporting libs.
Yeah, this is somewhat painful, but it results in a fully functioning binary capable of working on ridiculously old Linux systems, without having to install different Linux distributions on multiple virtual machines.
I tried creating a proper cross-compiler (hosted on the current Linux distribution running my Docker images), but found it too difficult to work with, even with the best tools I could find. Compiling the compiler within a Docker image took far less of my time and worked rather smoothly.

Is Ubuntu 9.04 a good choice for embedded Linux application development?

I want to change the Linux distro on my development (host) machine, which I use for embedded development.
I cross-compile applications for many different processors, and I need to download many different libraries to evaluate their functionality/performance/stability on different devices as well as on the PC.
So, is Ubuntu 9.04 a good choice for me?
Thanks,
Sunny.
If you are using gcc or another source-based compiler that runs on Linux, then I would say yes, you want a Linux distro, and Ubuntu is currently the most popular. I would try to avoid distro-specific things; drive down the middle of the road and you should be able to use any distro equally well.
That will largely depend on your needs. For an embedded system, I'd go with any distribution that sports a very small footprint and supports the necessary hardware.
Depending on your hardware, Debian might work fine. You could create your image with debootstrap, which allows for fairly small customized installs. It still includes apt and other things which might not be desirable, although that could work to your benefit if you need to push out updates.
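For example, creating a minimal Debian root filesystem (the suite, architecture, and mirror are examples):

    sudo debootstrap --arch=i386 stable ./rootfs http://ftp.debian.org/debian
    sudo chroot ./rootfs /bin/bash    # inspect or customize the image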
If you did go with Debian, you could most likely do all your development on Ubuntu and then push to your embedded system.
I use Ubuntu for my host system and a chrooted Gentoo install for building apps for an embedded target. I found Gentoo a good choice, as it is source-distributed and makes it easy to select which version of a particular library is installed.
One thing that is good to know is that Ubuntu and its derivatives use dash, not bash, as /bin/sh. This confuses crosstools and can cause severe headaches.
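You can check what /bin/sh points at, and be explicit about bash where it matters:

    ls -l /bin/sh                  # on Ubuntu: /bin/sh -> dash
    # Be explicit instead of relying on /bin/sh:
    #   #!/bin/bash                # shebang in scripts
    #   SHELL := /bin/bash         # in a GNU Makefile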

Building Linux packages for multiple distributions and versions

My company has a software product that's written in C for a Linux platform, built with autotools and distributed via binary packages. To make the binaries, we first produce a source RPM and then compile the source from the SRPM.
Currently we only provide RPM packages for 64-bit Fedora 10, but we want to start providing packages for multiple Linux distributions - 32-bit as well as 64-bit - and possibly different versions of each distribution as well (e.g. Fedora 11 as well as Fedora 10).
I've heard that the best way to produce builds for multiple Linux flavours is to have a single build server and use a different chrooted environment for each set of packages you want to build. Does anyone have a good resource that explains this in more detail, maybe with examples of well-known projects that use this build mechanism, or a better alternative that achieves the same goal?
Maybe you can research the following projects to get started:
Novell Build Service
Fedora Koji
You can use the LSB AppChecker to test your application/dynlib/shell-script compatibility. After that, you can use RPM for all the RPM distributions, alien for all the apt-get distributions, and tar.gz for the others.
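For example, converting one of your RPMs for apt-based systems with alien (the filenames are placeholders; note that alien bumps the release number by default):

    sudo alien --to-deb myapp-1.0-1.x86_64.rpm    # produces myapp_1.0-2_amd64.deb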
Tools like checkinstall will help you produce packages for different distros. Personally, if you are looking to integrate with existing package-management systems, you will also want to host repositories on your servers and provide the packages there, then have users configure their package managers to pull the apps off your servers.
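A minimal sketch of hosting a yum-style repository, assuming your RPMs sit under a web root (paths and names are illustrative):

    createrepo /var/www/html/repo/    # generate the repository metadata
    # Users then add a .repo file pointing at the server, e.g.:
    #   [myapp]
    #   baseurl=http://example.com/repo/
    #   gpgcheck=0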
Depending on what exactly your software does and which dependencies it has (if any) on local libraries, you may be able to build your software against an older glibc and have it work on many different distributions. This is what we do with InstallBuilder. If you do not have dependencies on specific packages, it is also possible to create RPM or DEB packages that will run on most RPM- or DEB-based Linux distros out there. Cross-Linux development, in any case, is not easy :) Good luck!
This is one of the cases covered by Bob Aiello in this article on build agents. We have several customers who use this approach to build on several platforms in parallel.
